.NET Developer Blogs

Microsoft's PowerShell is now open source and available for Linux and OS X

22.08.2016 10:05:52 | Steffen Steinbrecher

PowerShell, which is based on the .NET Framework, has now been released by Microsoft as an open source project. The GitHub repository can be found here: https://github.com/PowerShell/PowerShell. In addition, PowerShell is now available cross-platform for OS X and Linux. The downloads for the different platforms are available via the GitHub repository. PowerShell combines the philosophy, known from Unix shells, of […]

C#: Exception handling in asynchronous methods (async/await)

15.08.2016 18:05:23 | Steffen Steinbrecher

When using async/await, methods can have three different return types: Task, Task<T> or void. The best practices always say: “Avoid the return type void and always return a Task object!” But why is that? This is explained in more detail in the following article. As already mentioned in the introduction […]
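The core of the problem can be sketched in a few lines of C# (a minimal, hypothetical example; the method names are made up): an exception thrown in an async void method can't be caught by the caller, while the same exception from a Task-returning method surfaces when the task is awaited.

using System;
using System.Threading.Tasks;

class AsyncVoidDemo
{
    // Bad: the caller gets no Task and cannot observe the exception.
    static async void ThrowsFireAndForget()
    {
        await Task.Delay(10);
        throw new InvalidOperationException("lost exception");
    }

    // Good: the exception is captured in the returned Task.
    static async Task ThrowsAsync()
    {
        await Task.Delay(10);
        throw new InvalidOperationException("observable exception");
    }

    static void Main()
    {
        try
        {
            ThrowsFireAndForget();
        }
        catch (InvalidOperationException)
        {
            // Never reached: the exception is re-thrown on the captured
            // context (or the thread pool) and may crash the process.
        }

        try
        {
            ThrowsAsync().GetAwaiter().GetResult();
        }
        catch (InvalidOperationException ex)
        {
            // Works: the exception was stored in the Task and re-thrown here.
            Console.WriteLine("Caught: " + ex.Message);
        }
    }
}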

TFS 2015: Adding a new Windows Build Agent

11.08.2016 03:45:00 |

The TFS 2015 Build System

The build system before TFS 2015 was based on a pretty arcane XAML workflow engine, which was manageable but not fun to use. With TFS 2015 a new build system was implemented, which behaves pretty much the same way as other build systems (e.g. TeamCity or AppVeyor).

The “build workflow” is based on a simple “task” concept.

There are many related topics in the TFS world, e.g. release management, but this blog post will just focus on the “getting the system ready” part.

TFS Build Agents

Like other parts of Microsoft, TFS is now also in the cross-platform business. The build system in TFS 2015 is capable of building a huge range of languages. All you need is a compatible build agent.

My (simple) goal was to build a .NET application on a Windows build agent via the new TFS 2015 build system.

Step 1: Adding a new build agent

Important - Download Agent.

This one is maybe the hardest part. Instead of a huge TFS-Agent-Installer.msi, you need to navigate to the “Agent pool” tab inside the TFS control panel.

You need at least one pool; then click the “Download Agent” button.

Step 2: Configure the agent

Configuration.

The .zip package contains the actual build agent executable and a .cmd file.

Invoke the “ConfigureAgent.cmd” file:

We run those agents as a Windows Service (which was one of the last configuration questions) and are pretty happy with the system.

Step 3: You are done

Now your new build agent should appear under the given build agent pool:

TFS Build Agents.

After googling around I also found the corresponding TFS HowTo, which describes more or less the complete setup. Well… now it is documented on MSDN and this blog. Maybe this will help my future self ;)

Setup Angular2 & TypeScript in an ASP.NET Core project using Visual Studio

07.08.2016 21:00:00 | Jürgen Gutsch

In this post I try to explain how to set up an ASP.NET Core project with Angular2 and TypeScript in Visual Studio 2015.

There are two ways to set up an Angular2 application: The preferred way is to use the angular-cli, which is pretty simple. Unfortunately, the Angular CLI doesn't use the latest version. The other way is to follow the tutorial on angular.io, which sets up a basic starting point, but this needs a lot of manual steps. There are also two ways to set up how you want to develop your app with ASP.NET Core: One way is to separate the client app completely from the server part. It is pretty useful to decouple the server and the client, to create almost independent applications and to host them on different machines. The other way is to host the client app inside the server app. This is useful for small applications, to have all that stuff in one place, and it is easy to deploy on a single server.

In this post I'm going to show you how you can set up an Angular2 app which will be hosted inside an ASP.NET Core application, using Visual Studio 2015. For this approach the Angular CLI is not the right choice, because it already sets up a development environment for you, and all that stuff is configured a little bit differently. The effort to move this to Visual Studio would be too much. I will almost follow the tutorial on http://angular.io/, but we need to change some small things to get that stuff working in Visual Studio 2015.

Configure the ASP.NET Core project

Let's start with a new ASP.NET Core project based on .NET Core. (The name doesn't matter, so "WebApplication391" is fine.) We need to choose a Web API project, because the client-side Angular2 app will probably communicate with that API and we don't need all the predefined MVC stuff.

A Web API project can't serve static files like JavaScript, CSS styles, images, or even HTML files. Therefore we need to add a reference to Microsoft.AspNetCore.StaticFiles in the project.json:

"Microsoft.AspNetCore.StaticFiles": "1.0.0 ",

And in the Startup.cs, we need to add the following line, just before the call of `UseMvc()`:

app.UseStaticFiles();

Another important thing we need to do in the Startup.cs is to support the routing of Angular2. If the browser calls a URL which doesn't exist on the server, it could be an Angular route, especially if the URL doesn't contain a file extension.

This means we need to handle the 404 error which will occur in such cases, and serve the index.html to the client if a 404 occurs on a request without a file extension. To do this we just need a simple lambda based middleware, registered just before we call UseStaticFiles():

app.Use(async (context, next) =>
{
    await next();

    if (context.Response.StatusCode == 404
        && !Path.HasExtension(context.Request.Path.Value))
    {
        context.Request.Path = "/index.html";
        await next();
    }
});
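Putting the pieces together, the relevant part of the Configure method could look like this (a minimal sketch; only the ordering of the middlewares matters here):

public void Configure(IApplicationBuilder app)
{
    // 1. Rewrite extension-less 404s to the Angular entry point
    app.Use(async (context, next) =>
    {
        await next();

        if (context.Response.StatusCode == 404
            && !Path.HasExtension(context.Request.Path.Value))
        {
            context.Request.Path = "/index.html";
            await next();
        }
    });

    // 2. Serve index.html, JavaScript, CSS etc. from wwwroot
    app.UseStaticFiles();

    // 3. Let MVC handle the Web API routes
    app.UseMvc();
}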

Inside the Properties folder we'll find a file called launchSettings.json. This file is used to configure the behavior of Visual Studio 2015 when we press F5 to run the application. Remove all "api/values" strings from this file, because otherwise Visual Studio will always call that specific Web API every time you press F5.
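After removing the launch URL entries, a profile in launchSettings.json could look like this (a shortened sketch; ports and profile names will differ in your project):

{
  "profiles": {
    "IIS Express": {
      "commandName": "IISExpress",
      "launchBrowser": true,
      "environmentVariables": {
        "ASPNETCORE_ENVIRONMENT": "Development"
      }
    }
  }
}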

Now we have prepared the ASP.NET Core application and can start to follow the angular.io tutorial:

Let's start with the NodeJS packages. Using Visual Studio we can create a new "npm Configuration file" called package.json. Just copy the stuff from the tutorial:

{
  "name": "dotnetpro-ecollector",
  "version": "1.0.0",
  "scripts": {
    "start": "tsc && concurrently \"npm run tsc:w\" \"npm run lite\" ",
    "lite": "lite-server",
    "postinstall": "typings install && gulp restore",
    "tsc": "tsc",
    "tsc:w": "tsc -w",
    "typings": "typings"
  },
  "license": "ISC",
  "dependencies": {
    "@angular/common": "2.0.0-rc.4",
    "@angular/compiler": "2.0.0-rc.4",
    "@angular/core": "2.0.0-rc.4",
    "@angular/forms": "0.2.0",
    "@angular/http": "2.0.0-rc.4",
    "@angular/platform-browser": "2.0.0-rc.4",
    "@angular/platform-browser-dynamic": "2.0.0-rc.4",
    "@angular/router": "3.0.0-beta.1",
    "@angular/router-deprecated": "2.0.0-rc.2",
    "@angular/upgrade": "2.0.0-rc.4",
    "systemjs": "0.19.27",
    "core-js": "^2.4.0",
    "reflect-metadata": "^0.1.3",
    "rxjs": "5.0.0-beta.6",
    "zone.js": "^0.6.12",
    "angular2-in-memory-web-api": "0.0.14",
    "es6-promise": "^3.1.2",
    "es6-shim": "^0.35.0",
    "jquery": "^2.2.4",
    "bootstrap": "^3.3.6"
  },
  "devDependencies": {
    "gulp": "^3.9.1",
    "concurrently": "^2.0.0",
    "lite-server": "^2.2.0",
    "typescript": "^1.8.10",
    "typings": "^1.0.4"
  }
}

In this listing, I changed a few things:

  • I added "&& gulp restore" to the postinstall script
  • I also added gulp to the devDependencies, next to typings

After the file is saved, Visual Studio tries to load all the packages. This works, but VS shows a yellow exclamation mark because of some error. Until now, I couldn't figure out what is going wrong here. To be sure all packages are properly installed, use the console, change to the project directory and type npm install.

The postinstall will possibly fail, because gulp is not yet configured. We need gulp to copy the dependencies to the right location inside the wwwroot folder, because static files will only be served from that location. This is not part of the tutorial on angular.io, but it is needed to fit the client stuff into Visual Studio. Using Visual Studio we need to create a new "gulp Configuration file" with the name gulpfile.js:

var gulp = require('gulp');

gulp.task('default', function () {
    // place code for your default task here
});

gulp.task('restore', function() {
    gulp.src([
        'node_modules/@angular/**/*.js',
        'node_modules/angular2-in-memory-web-api/*.js',
        'node_modules/rxjs/**/*.js',
        'node_modules/systemjs/dist/*.js',
        'node_modules/zone.js/dist/*.js',
        'node_modules/core-js/client/*.js',
        'node_modules/reflect-metadata/reflect.js',
        'node_modules/jquery/dist/*.js',
        'node_modules/bootstrap/dist/**/*.*'
    ]).pipe(gulp.dest('./wwwroot/libs'));
});

The restore task copies all the needed files to the folder ./wwwroot/libs.

TypeScript needs some type definitions to get the types and API definitions of libraries which are not written in or not available in TypeScript. To load these, we use another tool called "typings", which is already installed with NPM. This tool is a package manager for type definition files. We need to configure it with a typings.json:

{
  "globalDependencies": {
    "es6-shim": "registry:dt/es6-shim#0.31.2+20160317120654",
    "jquery": "registry:dt/jquery#1.10.0+20160417213236"
  }
}

Now we have to configure TypeScript itself. We can also add a new item using Visual Studio to create a TypeScript configuration file (tsconfig.json). I would suggest not to use the default content, but the content from the angular.io tutorial:

{
  "compileOnSave": true,
  "compilerOptions": {
    "target": "es5",
    "module": "commonjs",
    "moduleResolution": "node",
    "sourceMap": true,
    "emitDecoratorMetadata": true,
    "experimentalDecorators": true,
    "removeComments": false,
    "noImplicitAny": false
  },
  "exclude": [
    "node_modules"
  ]
}

The only things I did with this file were to add the "compileOnSave" flag and to exclude the "node_modules" folder from the TypeScript build, because we don't need to compile the TypeScript files it contains and because we moved the needed JavaScript files to ./wwwroot/libs.

If you use Git or any other source code repository, you should ignore the files generated from our TypeScript files. In case of Git, I simply add another .gitignore to the ./wwwroot/app folder:

#remove generated files
*.js
*.map

We do this because the JavaScript files are only relevant to run the application and should be created automatically in the development environment or on a build server, before deploying the app.

The first app

That is all we need to prepare an ASP.NET Core project in Visual Studio 2015. Let's start creating the Angular app. The first step is to create an index.html in the folder wwwroot:

<html>
<head>
    <title>dotnetpro eCollector</title>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1">
    <link rel="stylesheet"http://feedproxy.google.com href="css/styles.css">
    <!-- 1. Load libraries -->
    <!-- Polyfill(s) for older browsers -->
    <scripthttp://feedproxy.google.com src="libs/shim.min.js"></script>
    <scripthttp://feedproxy.google.com src="libs/zone.js"></script>
    <scripthttp://feedproxy.google.com src="libs/Reflect.js"></script>
    <scripthttp://feedproxy.google.com src="libs/system.src.js"></script>
    <!-- 2. Configure SystemJS -->
    <scripthttp://feedproxy.google.com src="systemjs.config.js"></script>
    <script>
        System.import('app')
            .catch(function (err) { console.error(err); });
    </script>
</head>
<!-- 3. Display the application -->
<body>
    <my-app>Loading...</my-app>
</body>
</html>

As you can see, we load almost all JavaScript files from the libs folder, except the systemjs.config.js. This file is needed to configure Angular2, to define which modules are needed, where to find the dependencies and so on. Create a new JavaScript file, call it systemjs.config.js and paste the following content into it:

(function (global) {

    // map tells the System loader where to look for things
    var map = {
        'app': 'app', 
        'rxjs': 'libs/rxjs',
        '@angular': 'libs/@angular'
    };

    // packages tells the System loader how to load when no filename and/or no extension
    var packages = {
        'app': { main: 'main.js', defaultExtension: 'js' },
        'rxjs': { defaultExtension: 'js' },
        'angular2-in-memory-web-api': { defaultExtension: 'js' },
    };

    var packageNames = [
      '@angular/common',
      '@angular/compiler',
      '@angular/core',
      '@angular/http',
      '@angular/platform-browser',
      '@angular/platform-browser-dynamic',
      '@angular/router',
      '@angular/router-deprecated',
      '@angular/upgrade'
    ];

    packageNames.forEach(function (pkgName) {
        packages[pkgName] = { main: 'index.js', defaultExtension: 'js' };
    });

    var config = {
        map: map,
        packages: packages
    }

    // filterSystemConfig - index.html's chance to modify config before we register it.
    if (global.filterSystemConfig) { global.filterSystemConfig(config); }

    System.config(config);

})(this);

This file also defines the main entry point, which is main.js. That file is the transpiled version of the TypeScript file main.ts, which we need to create in the next step. The main.ts bootstraps our Angular2 app:

import { bootstrap } from '@angular/platform-browser-dynamic';
import { AppComponent } from './app.component';

bootstrap(AppComponent);

We also need to create our first Angular2 component. Create a TypeScript file with the name app.component.ts inside the app folder:

import { Component } from '@angular/core';

@Component({
  selector: 'my-app',
  template: '<h1>My first Angular App in Visual Studio</h1>'
})
export class AppComponent { }

If all works fine, Visual Studio should have created a JavaScript file for each TypeScript file, and the build should run. Pressing F5 should start the application, and a browser should open.

For a short moment, "Loading..." is visible in the browser. After the app is initialized and all the Angular2 magic has happened, you'll see the contents of the template defined in app.component.ts.

Conclusion

I propose to use Visual Studio just for small single page applications, because it gets slower the more dynamic files need to be handled. ASP.NET Core is pretty cool at handling dynamically generated files, but Visual Studio still is not. VS tries to track and manage all the files inside the project, which slows it down a lot. One solution is to disable source control in Visual Studio and use an external tool to manage the sources.

Another - even better - solution is not to use Visual Studio for front-end development. In a new project, I propose to separate front-end and back-end development and to use Visual Studio Code for the front-end development, or even for both. You need to learn a few things about NPM and gulp, and you need to use a console in this case, but web development will be a lot faster and a lot more lightweight with this approach. In one of the next posts, I'll show how I currently work with Angular2.

.NET Framework 4.6.2 released

06.08.2016 18:51:17 | Steffen Steinbrecher

The .NET Framework has been released in version 4.6.2. There are new features in the following areas: Base Class Library, Common Language Runtime, ClickOnce, ASP.NET, SQL, Windows Presentation Foundation, Windows Communication Foundation. Here are a few of the new features: long path support (paths with more than 260 characters are now supported in the System.IO API), TLS […]
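To illustrate the long path support (a sketch, assuming an application targeting .NET Framework 4.6.2, where the System.IO APIs no longer enforce the old 260 character limit):

using System.IO;
using System.Linq;

class LongPathDemo
{
    static void Main()
    {
        // A directory path far beyond the old 260 character MAX_PATH limit;
        // the \\?\ prefix marks it as an extended-length path.
        var longPath = @"\\?\C:\temp\" + string.Join(@"\", Enumerable.Repeat("subdir", 50));

        // Targeting .NET 4.6.2, this no longer throws a PathTooLongException.
        Directory.CreateDirectory(longPath);
        File.WriteAllText(Path.Combine(longPath, "test.txt"), "long paths work");
    }
}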

GZip für WebAPI 2 aktivieren

04.08.2016 10:08:00 | Martin Hey

With GZip compression you can reduce network load considerably - especially when larger amounts of data are transferred in which similar words occur frequently. Even though the widely used JSON format is not as chatty as SOAP, for example, the property names still recur in every object. So when a list of objects is transferred, there is some savings potential here.

In contrast to content negotiation, which the Web API handles itself, there apparently is no automatic mechanism for GZip compression. A search turned up various approaches - e.g. the one by Ben Foster or the one by Radenko Zec. The latter became the template for my current solution.

Step 1 - Create an ActionFilter that modifies the response
using System;
using System.IO;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;
using System.Web.Http.Filters;
using Ionic.Zlib;

public class CompressionAttribute : ActionFilterAttribute
{
    public override async Task OnActionExecutedAsync(HttpActionExecutedContext context, CancellationToken cancellationToken)
    {
        var acceptEncoding = context.Request.Headers.AcceptEncoding;
        var acceptsGzip = acceptEncoding.Contains(new System.Net.Http.Headers.StringWithQualityHeaderValue("gzip"));

        if (!acceptsGzip)
        {
            return;
        }

        var content = context.Response.Content;
        if (content == null)
        {
            return;
        }

        var headers = context.Response.Content.Headers;
        var bytes = await content.ReadAsByteArrayAsync();

        var zlibbedContent = (await Compress(bytes)) ?? new byte[0];
        context.Response.Content = new ByteArrayContent(zlibbedContent);

        foreach (var header in headers)
        {
            if (header.Key.Equals("Content-Length", StringComparison.OrdinalIgnoreCase))
            {
                continue;
            }
            context.Response.Content.Headers.Add(header.Key, header.Value);
        }
        context.Response.Content.Headers.Add("Content-Encoding", "gzip");
    }

    private static async Task<byte[]> Compress(byte[] value)
    {
        if (value == null)
        {
            return null;
        }

        using (var output = new MemoryStream())
        {
            using (var gzipStream = new GZipStream(output, CompressionMode.Compress, CompressionLevel.BestSpeed))
            {
                gzipStream.FlushMode = FlushType.Finish;
                await gzipStream.WriteAsync(value, 0, value.Length);
            }
            return output.ToArray();
        }
    }
}
The compression itself is handled by Ionic.Zlib.

The implementation itself is quite simple. First it checks whether the client accepts GZip encoding. If not, the attribute does not change the output. Otherwise the output is compressed and the response is rebuilt. Reassigning the response content also discards all previous content headers, which is why they have to be set again afterwards.

Finally, the client is informed that the content is compressed.

Step 2 - Apply the ActionFilter
The ActionFilter can now be used by annotating a single Web API method with it (see the sketch below), or by activating it globally for all requests.
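A minimal sketch of the per-method variant (the controller and action are made up for illustration):

public class ProductsController : ApiController
{
    // The response of this action is gzipped whenever the client
    // sends an "Accept-Encoding: gzip" header.
    [Compression]
    public IHttpActionResult GetAll()
    {
        return Ok(new[] { "Product A", "Product B" });
    }
}

The global variant is registered like this: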
public static class WebApiConfig
{
    public static void Register(HttpConfiguration config)
    {
        ...

        config.Filters.Add(new CompressionAttribute());

        ...
    }
}
This happens in the WebApiConfig.

Add HTTP headers to static files in ASP.​NET Core

03.08.2016 21:00:00 | Jürgen Gutsch

Usually, static files like JavaScript, CSS, images and so on are cached on the client after the first request. But sometimes you need to disable the cache or add special cache handling.

To provide static files in an ASP.NET Core application, you use the StaticFileMiddleware:

app.UseStaticFiles();

This extension method has two overloads. One of them takes a StaticFileOptions instance, which is our friend in this case. These options have a property called OnPrepareResponse of type Action<StaticFileResponseContext>. Inside this Action, you have access to the HttpContext and more. Let's see how it looks to set the cache lifetime to 12 hours:

app.UseStaticFiles(new StaticFileOptions()
{
    OnPrepareResponse = context =>
    {
        context.Context.Response.Headers["Cache-Control"] = 
                "private, max-age=43200";

        context.Context.Response.Headers["Expires"] = 
                DateTime.UtcNow.AddHours(12).ToString("R");
    }
});

With the StaticFileResponseContext, you also have access to the currently handled file. With this info, it is possible to manipulate the HTTP headers just for a specific file or file type.
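For example, to disable caching only for HTML templates while keeping the 12 hour lifetime for everything else, you could check the file name inside OnPrepareResponse (a sketch based on the API shown above):

app.UseStaticFiles(new StaticFileOptions()
{
    OnPrepareResponse = context =>
    {
        // context.File gives access to the file currently being served
        if (context.File.Name.EndsWith(".html", StringComparison.OrdinalIgnoreCase))
        {
            // Don't cache HTML templates at all
            context.Context.Response.Headers["Cache-Control"] =
                    "no-cache, no-store, must-revalidate";
            context.Context.Response.Headers["Expires"] = "-1";
        }
        else
        {
            context.Context.Response.Headers["Cache-Control"] =
                    "private, max-age=43200";
        }
    }
});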

This kind of limited caching ensures that the client doesn't use outdated files, but uses cached versions while working with the app. We use this in an ASP.NET Core single page application which uses many JavaScript and HTML template files. In combination with continuous deployment, we need to ensure the application uses the latest files.

Building a home robot: Part 1 - introduction and head

31.07.2016 17:00:00 | Daniel Springwald

(see all parts of "building a home robot")

Since I was a child and saw the Tomy Omnibot, I have been fascinated by robot companions. In the following years I made some attempts to get or build one - for example the Sony Aibo in 2001 or my Cobra Robot.

Some weeks ago I started my first serious project of a self-driving, self-charging home robot - including things like face detection and voice output.

My girlfriend was not sure whether it would be a little bit scary to live together with a home robot. So I decided to create a first prank design of the robot head especially for her ;-)


(A scary prank design created from old animatronics parts)

The real head will be much cuter and more abstract, using a small monitor instead of physical eyeballs.

As the robot “brain” I chose a Raspberry Pi 3, because it is small, fast and has low power consumption. Other reasons were: it is cheap, has a native camera, offers hardware connections via the GPIO port, and there is an official touchscreen monitor available.

To create most of the body parts I want to use my 3D printer.

In the past I often used 3D editor programs like Autodesk 3D Studio or Cinema 4D - but never real CAD programs. After some research for a low cost but useful CAD program I found OpenSCAD.

This free open source tool and its unusual approach of creating objects by writing program code suits my needs perfectly.

Only 3 hours later I had finished my first CAD model of a head prototype and started the 3D printer. (I plan to upload all the STL files to thingiverse.com to share them with other makers who want to build their own home robots.)

The first prototype of the robot front head:

The following 4 improved prototypes:

An early prototype showing the complete head shape:

The final front part without raspberry pi...

...and with raspberry pi and the touchscreen monitor:


The next step will be the neck design and the motors for moving the head and the neck.

Coming soon: Part 2 - Neck design and movement

C#: BlockingCollection using the MetroFtpClient as an example

24.07.2016 17:27:25 | Steffen Steinbrecher

Inside the MetroFtpClient (https://github.com/steve600/MetroFtpClient) there is a queue to manage the uploads and downloads to be executed. When processing the queue, you often want a certain degree of parallelism to increase performance (e.g. several simultaneous downloads). With .NET 4.0 Microsoft took a big step in this direction and gave developers […]
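The excerpt is cut off here, but the underlying pattern is easy to sketch: a BlockingCollection<T> as the download queue, consumed by a fixed number of worker tasks (a minimal, hypothetical example; the real MetroFtpClient code looks different):

using System;
using System.Collections.Concurrent;
using System.Linq;
using System.Threading.Tasks;

class DownloadQueueDemo
{
    static void Main()
    {
        var queue = new BlockingCollection<string>(); // URLs to download

        // Start e.g. three consumers for simultaneous downloads.
        var consumers = Enumerable.Range(1, 3).Select(i => Task.Run(() =>
        {
            // GetConsumingEnumerable blocks until items arrive and
            // completes when CompleteAdding() has been called.
            foreach (var url in queue.GetConsumingEnumerable())
            {
                Console.WriteLine("Worker " + i + " downloads " + url);
            }
        })).ToArray();

        // Producer: enqueue the downloads.
        queue.Add("ftp://example.com/a.zip");
        queue.Add("ftp://example.com/b.zip");
        queue.CompleteAdding(); // no more items will be added

        Task.WaitAll(consumers);
    }
}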

How to continuously deploy an ASP.NET Core 1.0 web app to Microsoft Azure

21.07.2016 21:00:00 | Jürgen Gutsch

We started the first real world project with ASP.NET Core RC2 a month ago and we learned a lot of new stuff around ASP.NET Core:

  • Continuous Deployment to an Azure Web App
  • Token based authentication with Angular2
  • Setup Angular2 & TypeScript in an ASP.NET Core project
  • Entity Framework Core setup and initial database seeding

In this post, I'm going to show you how we set up continuous deployment for an ASP.NET Core 1.0 project, without tackling TypeScript and Angular2. Please remember: The tooling around .NET Core and ASP.NET Core is still in "preview" and will definitely change until RTM. I'll try to keep this post up-to-date. I won't use the direct deployment to an Azure Web App from a git repository, for some reasons I mentioned in a previous post.

I will write some more lines about the other learned stuff in one of the next posts.

Let's start with the build

Building is the easiest part of the entire deployment process. To build an ASP.NET Core 1.0 solution you are able to use MSBuild.exe. Just pass the solution file to MSBuild and it will build all projects in the solution.

The *.xproj files use specific targets which wrap and use the dotnet CLI. You are also able to use the dotnet CLI directly: just call dotnet build for each project, or even simpler, call dotnet build in the solution folder and the tools will recursively go through all sub-folders, look for project.json files and build all the projects in the right build order.

Usually I define an output path to build all the projects into a specific folder. This makes it a lot easier for the next step:

Test the code

Some months ago, I wrote about unit testing DNX libraries (Xunit, NUnit). This didn't really change in .NET Core 1.0. Depending on the test framework, a test library could be a console application which can be called directly; in other cases the test runner is called, which gets the test libraries passed as arguments. We use NUnit to create our unit tests, and NUnit doesn't provide a separate runner for .NET Core yet. All of the test libraries are console apps and will build to a .exe file. So we search the build output folder for our test libraries and call them one by one. We also pass the test output file name to those libraries, to get detailed test results.
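In our script this is just a loop over the build output folder. A simplified C# sketch of the idea (the naming convention and the result argument are assumptions and depend on your NUnitLite version):

using System.Diagnostics;
using System.IO;

class TestRunner
{
    static void RunAllTests(string buildOutput)
    {
        // Convention: every test library is a console app named *.Tests.exe
        foreach (var testExe in Directory.GetFiles(buildOutput, "*.Tests.exe"))
        {
            var resultFile = Path.ChangeExtension(testExe, ".TestResult.xml");

            // Pass the output file name to get detailed test results
            var process = Process.Start(testExe, "--result:\"" + resultFile + "\"");
            process.WaitForExit();
        }
    }
}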

This is pretty much all to run the unit tests.

Throw it to the clouds

Deployment was a little more tricky. But we learned how to do it from the Visual Studio output: if you do a manual publish with Visual Studio, the output window will tell you how the deployment needs to be done. These are just two steps:

1. Publish to a specific folder using the "dotnet publish" command

We are calling dotnet publish with these arguments:

Shell.Exec("dotnet", "publish \"" + webPath + "\" --framework net461 --output \"" + 
    publishFolder + "\" --configuration " + buildConf, ".");
  • webPath contains the path to the web project which needs to be deployed
  • publishFolder is the publish target folder
  • buildConf defines the Debug or Release build (we build with Debug in dev environments)

2. Use msdeploy.exe to publish the complete publish folder to a remote machine

The remote machine in our case is an instance of an Azure Web App, but it could also be any other target machine. msdeploy.exe is not a new tool, but it still works, even with ASP.NET Core 1.0.

So we just need to call msdeploy.exe like this:

Shell.Exec(msdeploy, "-source:contentPath=\"" + publishFolder + "\" -dest:contentPath=" + 
    publishWebName + ",ComputerName=" + computerName + ",UserName=" + username + 
    ",Password=" + publishPassword + ",IncludeAcls='False',AuthType='Basic' -verb:sync -" + 
    "enablerule:AppOffline -enableRule:DoNotDeleteRule -retryAttempts:20",".")
  • msdeploy contains the path to the msdeploy.exe, which is usually C:\Program Files (x86)\IIS\Microsoft Web Deploy V3\msdeploy.exe.
  • publishFolder is the publish target folder from the previous command.
  • publishWebName is the name of the Azure Web App name, which also is the target content path.
  • computerName is the name/URL of the remote machine. In our case it is "https://" + publishWebName + ".scm.azurewebsites.net/msdeploy.axd".
  • username and password are the deployment credentials. The password is hashed, as in the publish profile that you can download from Azure. Just copy and paste the hashed password.

Conclusion

I didn't mention all the work that needs to be done to prepare the web app. We also use Angular2 with TypeScript, so we also need to get all the NPM dependencies, move the needed files to the wwwroot folder, and bundle and minify all the JavaScript files. This is also done in our build & deployment chain. But for this post, it should be enough to describe just the basic steps for a usual ASP.NET Core 1.0 app.

Open source: Introducing the MetroFtpClient

19.07.2016 12:41:08 | Steffen Steinbrecher

In this post I'd like to present a small tool for FTP access. The starting points for the MetroFtpClient (https://github.com/steve600/MetroFtpClient) were the PrismMahAppsSample (https://github.com/steve600/PrismMahAppsSample) and the standard .NET classes FtpWebRequest/FtpWebResponse. Several open source projects were used for this project as well. Here is an overview: Dragablz – https://github.com/ButchersBoy/Dragablz, MahApps.Metro – https://github.com/MahApps/MahApps.Metro, MaterialDesignInXAMLToolkit – https://github.com/ButchersBoy/MaterialDesignInXamlToolkit, Newtonsoft.Json – https://github.com/JamesNK/Newtonsoft.Json, OxyPlot – https://github.com/oxyplot/oxyplot […]

Visual Studio Code 1.3 - tabs, extensions view and more news

14.07.2016 12:30:29 | Kay Giza

Visual Studio Code (VSCode) has received some significant improvements and highlights with version 1.3. In this blog post I'd like to present some of the news. One... [... more in this blog post on Giza-Blog.de]


Working with user secrets in ASP.​NET Core applications.

10.07.2016 21:00:00 | Jürgen Gutsch

Some time ago there was a study about critical data in GitHub projects. They wrote a crawler to find passwords, user names and other secret stuff in projects on GitHub. And they found a lot of such data in public projects, even in projects of huge companies that should care a lot about security.

Most of these credentials are stored in .config files. For sure, you need to configure the access to a database somewhere; you also need to configure the credentials for storages, mail servers, FTP, whatever. In many cases these credentials are used for development, with a lot more rights than the production credentials.

Fact is: Secret information shouldn't be pushed to any public source code repository. Even better: not pushed to any source code repository.

But what is the solution? How should we tell our app where to get this secret information?

On Azure, you are able to configure your settings directly in the application settings of your web app. These override the settings in your config file; it doesn't matter if it's a web.config or an appsettings.json.

But we can't do the same on the local development machine. There is no configuration like this. How and where do we save secret credentials?

With .NET Core, there is something similar now. There is a SecretManager tool, provided by the .NET Core SDK (Microsoft.Extensions.SecretManager.Tools), which you can access with the dotnet CLI.

This tool stores your secrets locally on your machine. This is not a highly secure password manager like KeePass, and it is not really that secure, but on your development machine it provides the possibility NOT to store your secrets in a config file inside your project. And this is the important thing here.

To use the SecretManager tool, you need to add that tool in the "Tools" section of your project.json, like this:

"Microsoft.Extensions.SecretManager.Tools": {
  "version": "1.0.0-preview2-final",
  "imports": "portable-net45+win8+dnxcore50"
},

Be sure you have a userSecretsId in your project.json. With this ID the SecretManager tool assigns the user secrets to your app:

"userSecretsId": "aspnet-UserSecretDemo-79c563d8-751d-48e5-a5b1-d0ec19e5d2b0",

If you create a new ASP.NET Core project with Visual Studio, the SecretManager tool is already added.

Now you just need to access your secrets inside your app. In a new Visual Studio project, this should also already be done and look like this:

public Startup(IHostingEnvironment env)
{
    _hostingEnvironment = env;

    var builder = new ConfigurationBuilder()
        .SetBasePath(env.ContentRootPath)
        .AddJsonFile("appsettings.json", optional: true, reloadOnChange: true)
        .AddJsonFile($"appsettings.{env.EnvironmentName}.json", optional: true);

    if (env.IsDevelopment())
    {
        // For more details on using the user secret store see 
        // http://go.microsoft.com/fwlink/?LinkID=532709
        builder.AddUserSecrets();

        // This will push telemetry data through Application 
        // Insights pipeline faster, allowing you to view results 
        // immediately.
        builder.AddApplicationInsightsSettings(developerMode: true);
    }

    builder.AddEnvironmentVariables();
    Configuration = builder.Build();
}

If not, add a NuGet reference to Microsoft.Extensions.Configuration.UserSecrets 1.0.0 in your project.json and add builder.AddUserSecrets(); as shown here.

The Extension Method AddUserSecrets() loads the secret information of that project into the ConfigurationBuilder. If the keys of the secrets are equal to the keys in the previously defined appsettings.json, the app settings will be overwritten.

If this all is done you are able to use the tool to store new secrets:

dotnet user-secrets set key value

If you create a separate section in your appsettings.json for the existing settings, you need to combine the user secret key with the section's name and the setting's name, separated by a colon.

I created settings like this:

"AppSettings": {
    "MySecretKey": "Hallo from AppSettings",
    "MyTopSecretKey": "Hallo from AppSettings"
},

To overwrite the keys with the values from the SecretManager tool, I need to create entries like this:

dotnet user-secrets set AppSettings:MySecretKey "Hello from UserSecretStore"
dotnet user-secrets set AppSettings:MyTopSecretKey "Hello from UserSecretStore"
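Reading those values in the app then works through the normal configuration API; nothing special is needed (a minimal sketch, using the Configuration property built in the Startup shown above):

// In Development these return the values from the user secret store,
// otherwise the values from the appsettings.json.
var mySecretKey = Configuration["AppSettings:MySecretKey"];
var myTopSecretKey = Configuration["AppSettings:MyTopSecretKey"];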

BTW: to override existing keys with new values, just set the secret again with the same key and the new value.

This way to handle secret data works pretty fine for me.

The SecretManager tool knows three more commands:

  • dotnet user-secrets clear: removes all secrets from the store
  • dotnet user-secrets list: shows you all existing keys
  • dotnet user-secrets remove <key>: removes the specific key

Just type dotnet user-secrets --help to see more information about the existing commands.

If you need to handle some more secrets in your project, it possibly makes sense to create a small batch file to add the keys, or to share the settings with build and test environments. But never ever push this file to the source code repository ;)

CAKE: Building solutions with C# & Roslyn

09.07.2016 18:15:00 |


CAKE - C# Make

  • A DSL for build tasks (e.g. build following projects, copy stuff, deploy stuff etc.)
  • It’s just C# code that gets compiled via Roslyn
  • Active community, OSS & written in C#
  • You can get CAKE via NuGet
  • Before we begin you might want to check out the actual website of CAKE
  • Cross Platform support

Our goal: Building, running tests, package NuGet Packages etc.

I already did a couple of MSBuild and FAKE related blog posts, so if you are interested in these topics as well, go ahead (some are quite old; there is a high chance that some pieces might not apply anymore):

Ok… now back to CAKE.

Let’s start with the basics: Building

I created a pretty simple WPF app and followed these instructions.

The build.cake script

My script is a simplified version of this build script:

// ARGUMENTS
var target = Argument("target", "Default");

// TASKS
Task("Restore-NuGet-Packages")
    .Does(() =>
{
    NuGetRestore("CakeExampleWithWpf.sln");
});

Task("Build")
    .IsDependentOn("Restore-NuGet-Packages")
    .Does(() =>
{
      MSBuild("CakeExampleWithWpf.sln", settings =>
        settings.SetConfiguration("Release"));

});

// TASK TARGETS
Task("Default").IsDependentOn("Build");

// EXECUTION
RunTarget(target);

If you know FAKE or MSBuild, this is more or less the same structure. You define tasks, which may depend on other tasks. At the end you invoke one task and the dependency chain will do its work.
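To move towards the other goals (running tests, packaging), you would simply append more tasks to the same script. A sketch of a possible test task using CAKE's NUnit3 alias (the path pattern is an assumption, and the NUnit console runner must be available in the tools folder):

Task("Run-Unit-Tests")
    .IsDependentOn("Build")
    .Does(() =>
{
    NUnit3("./**/bin/Release/*.Tests.dll");
});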

Invoke build.cake

The “build.ps1” will invoke “tools/cake.exe” with the input file “build.cake”.

“build.ps1” is just a helper. This PowerShell script will download nuget.exe and the CAKE NuGet package and extract it under a /tools folder. If you don’t have problems with binary files in your source control, you don’t need this PowerShell script.

Our first CAKE script!

The output is very well formatted and should explain the mechanics behind it well enough:

Time Elapsed 00:00:02.86
Finished executing task: Build

========================================
Default
========================================
Executing task: Default
Finished executing task: Default

Task                          Duration
--------------------------------------------------
Restore-NuGet-Packages        00:00:00.5192250
Build                         00:00:03.1315658
Default                       00:00:00.0113019
--------------------------------------------------
Total:                        00:00:03.6620927

The first steps are pretty easy - it’s much simpler than MSBuild and feels good if you know C#.

The super simple intro code can be found on GitHub.

Re-MVPed

08.07.2016 12:06:00 | Jörg Neumann


My MVP Award in the “Windows Platform Development” category has been renewed for another year. Thank you, Microsoft!


How web development changed for me over the last 20 years

07.07.2016 21:00:00 | Jürgen Gutsch

The web changed pretty fast within the last 20 years. More and more logic moved from the server side to the client side. More complex JavaScript needs to be written on the client side. And some freaky things happened in the last few years: JavaScript moved to the server and web technology moved to the desktop. That is nothing new, but who was thinking about that 20 years ago?

The web changed, but so did my technology stack. It seems my stack changed back to the roots. 20 years ago, I started with HTML and JavaScript, moving forward to classic ASP using VBScript. In 2001 I started playing around with ASP.NET and VB.NET and used it in production until the end of 2006. In 2007 I started writing ASP.NET using C#. HTML and JavaScript were still involved, but more or less wrapped in third party controls, and jQuery was an alias for JavaScript at that time. Everything about JavaScript was just jQuery. ASP.NET WebForms felt pretty huge and not really flexible, but it worked. Later - in 2010 - I also did a lot of stuff with Silverlight, WinForms and WPF.

ASP.NET MVC came up and the web stuff started to feel a little more natural again than ASP.NET WebForms. From an ASP.NET developer perspective, the web changed for the better: more clean, more flexible, more lightweight and even more natural.

But there was something new coming up: things from outside the ASP.NET world. Strong JavaScript libraries like Knockout, Backbone and later on Angular and React. The first single page application frameworks came up (sorry, I didn't want to mention the crappy ASP.NET AJAX thing...), and the UI logic moved from the server to the client. (Well, we did a pretty cool SPA back in 2005, but we didn't think about creating a framework out of it.)

NodeJS changed the world again, by using JavaScript on the server. You just need two different languages (HTML and JavaScript) to create cool web applications. I didn't really care about NodeJS, except using it in the background, because some tools are based on it. Maybe that was a mistake, who knows... ;)

Now we got ASP.NET Core, which feels a lot more natural than the classic ASP.NET MVC.

Natural in this case means it feels almost the same as writing classic ASP. It means using the stateless web and working with the stateless web, instead of trying to fix it; working with the request and response more directly than with classic ASP.NET MVC, and even more so than in ASP.NET WebForms. It doesn't mean writing the same unstructured, crappy shit as with classic ASP. ;)

Since we got the pretty cool client side JavaScript frameworks and simplified, minimalistic server side frameworks, the server part was reduced to just serving static files and serving data over REST-ish services.

This is the time where it makes sense to have a deeper look into TypeScript. Until now it didn't make sense to me. I have been writing JavaScript for around 20 years, more or less complex scripts, but I never wrote as much JavaScript within a single project as since I started using AngularJS last year. Angular2 was also the reason to have a deep look into TypeScript, because now it is completely written in TypeScript. And it makes absolute sense to use it.

A few weeks ago I started the first real NodeJS project: a desktop application which uses NodeJS to provide a highly flexible scripting runtime for the users. NodeJS provides the functionality and the UI to the users. All written in TypeScript, instead of plain JavaScript. Why? Because TypeScript has a lot of unexpected benefits:

  • You are still able to write JavaScript ;)
  • It helps you to write small modules and structured code
  • It helps you to write NodeJS compatible modules
  • In general you don't need to write all the JavaScript overhead code for every module
  • You will just focus on the features you need to write

This is why TypeScript has a great benefit for me. Sure, a typed language is also useful in many cases, but - having worked with JS for 20 years - I also like the flexibility of implicitly typed JavaScript and I'm pretty familiar with it. That means, from my perspective, the good thing about TypeScript is that I am still able to write implicitly typed code in TypeScript and to use the flexibility of JavaScript. This is why I wrote "You are still able to write JavaScript".

The web technology changed, my technology stack changed and the tooling changed. All the stuff became more lightweight, even the tools. The console came back and the IDEs changed back to the roots: just being text editors with some benefits like syntax highlighting and IntelliSense. Currently I prefer to use the "Swiss army knife" Visual Studio Code or Adobe Brackets, depending on the type of project. Both start pretty fast and include nice features.

Using such lightweight IDEs is pure fun. Everything is fast, because the machine's resources can be used by the apps I need to develop, instead of by the IDE I need to use to develop the apps. This makes development a lot faster.

Starting the IDE today means: starting cmder (my favorite console on Windows), changing to the project folder, starting a console command to watch the TypeScript files and compile them after save, starting another console to use tools like NPM, gulp, typings, the dotnet CLI, NodeJS, and so on, and starting my favorite lightweight editor to write some code. :)

Using coroutines to create tutorials in Unity 3D

07.07.2016 01:59:00 | Daniel Springwald

When reading about Unity's coroutine concept for the first time, I thought to myself: how can this be useful?

In the meantime I have found out that coroutines are one of the most interesting features of Unity 3D.

For my current game project I use them for several purposes; the most useful one is to control interactive level tutorials.

The game contains an advisor avatar:

It guides the player through the level and automatically appears when a milestone is reached and the next hint is needed.

Coroutines are the perfect tool to manage this kind of tutorial.

  1. Create a coroutine and run it when the level starts.
  2. If an initial introduction is needed, this should be the first command inside the coroutine.
  3. Create a WHILE loop which waits until the next milestone event happens and contains a “yield return null;”.
  4. What a milestone is depends on the kind of game you are working on. For my current game project these are “opening a dialogue”, “selecting a specific object” or “reaching a special place in the level”.
  5. When the condition of the milestone becomes TRUE, the WHILE loop will exit. In my game the next command after the WHILE loop invokes the advisor popup to explain the next step.
  6. Then the next WHILE loop waiting for the next milestone follows – and so on.

Here is an example of how such tutorial code could look:

    protected IEnumerator HintPlayback(int moneyToAdd, int itemsToBuy)
    {
        yield return new WaitForSeconds(4);

        ShowMessage("Please look at the pending tasks.");

        while (!this.tasks.AreOpen) yield return null;

        ShowMessage(string.Format("Please add some money - at least {0}$.", moneyToAdd));

        while (this.money < moneyToAdd) yield return null;

        ShowMessage(string.Format("Perfect. Now please buy at least {0} items.", itemsToBuy));

        while (this.items.count < itemsToBuy) yield return null;

        ShowMessage("You have completed the tutorial.");

        yield break;
    }

 

You can also skip one or more hints if milestones are skipped: just check for both conditions (for milestone 1 and 2) in the WHILE loop of milestone 1, as sketched below.
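A minimal sketch of that combined check (the two booleans are placeholders for your own milestone conditions):

        // Wait for milestone 1, but stop waiting as soon as milestone 2
        // has already been reached - the hint for milestone 1 is then skipped.
        while (!(milestone1Reached || milestone2Reached)) yield return null;

        if (!milestone2Reached)
            ShowMessage("Hint for milestone 1...");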

Writing blog posts using Pretzel

05.07.2016 21:00:00 | Jürgen Gutsch

So far I have written more than 30 blog posts with Pretzel and it works pretty well. From my current perspective it was a good decision to do this huge change and to move to that pretty cool and lightweight system.

I'm using MarkdownPad 2 to write the posts. Writing is much easier now. The process is simplified and publishing is almost automated. I also added my blog CSS to that editor to have a nice preview.

The process of writing and publishing new posts goes like this:

  1. Creating a new draft article and save it in the _drafts folder
  2. Working on that draft
  3. Move the finished article to the _posts folder
  4. Commit and push that post to GitHub
  5. Around 30 seconds later the post is published on Azure

This process allows me to write offline on the train, while traveling to the office in Basel. This is the most important thing to me.

The other big change was switching to English. I now get more readers and feedback from around the world. Most readers now come from the US, UK, India and Russia, but also from the other European countries, Australia and the Middle East (and Cluj in Romania).

Maybe I lost some readers from the German speaking area (Germany, Switzerland and Austria) who liked to read my posts in German (I need to find a good translation service to integrate), but I got some more from around the world.

Writing feels good in both English and in MarkdownPad :) From my perspective it was a good decision to change the blog system and even the language.

To learn more about Pretzel, have a look into my previous post about using Pretzel.

.NET Core 1.0 RTM and ASP.​NET Core 1.0 RTM was announced

27.06.2016 21:00:00 | Jürgen Gutsch

Finally we got .NET Core 1.0 RTM and ASP.NET Core 1.0 RTM. Yesterday Microsoft announced the release of .NET Core 1.0 and ASP.NET Core 1.0.

Scott Hanselman posted a great summary about it: ".NET Core 1.0 is now released!" You'll find more detailed information about .NET Core 1.0 on the .NET Blog in the post "Announcing .NET Core 1.0", and pretty detailed information about ASP.NET Core 1.0 in the .NET Web Development and Tools Blog in the post "Announcing ASP.NET Core 1.0".

Updating existing .NET Core RC applications to the RTM needs some attention (not as much as from RC1 to RC2, but there is a little bit to do). First of all: Visual Studio 2015 Update 3 is needed, as mentioned in pretty much all of the blog posts. To learn more about the things that need to be done, Rick Strahl posted a great and pretty detailed post about updating an existing application: "Upgrading to ASP.NET Core RTM from RC2".

Which screen capture software do I use?

27.06.2016 18:31:54 | Kay Giza

From time to time I am asked which tools I use for my presentations at talks or for my blog posts - most recently at the Developer Week. I'll take this as an occasion to outline it, especially for all techies or people who might need this daily. To make it short, I generally use... [... more in this blog post on Giza-Blog.de]


PDF download: Understanding Microsoft Azure - a guide for developers

22.06.2016 11:41:24 | Kay Giza

The PDF document is free of charge and available without registration. Under the title 'Azure verstehen - ein Leitfaden für Entwickler' ('Understanding Azure - a guide for developers'), Microsoft has published a roughly 40-page German-language PDF. The guide describes the why and how of Microsoft Azure scenarios... [... more in this blog post on Giza-Blog.de]


Visual Studio 2015: Mole Visual Studio Debugger/Visualizer

18.06.2016 12:37:48 | Steffen Steinbrecher

Mole is an alternative visualizer for Visual Studio for detailed inspection of .NET applications. During a debugging session, visualizers allow you to inspect UI and data objects. For example, you can look at the VisualTree of a WPF application directly in the debugger, without having to resort to additional tools. Beyond that, Mole supports searching and editing properties and […]

Tips: Useful community projects from the .NET world

17.06.2016 17:07:31 | Steffen Steinbrecher

In this post I'd like to present some community projects. Especially in recent years the open source community has grown rapidly. This can be seen in the various platforms alone: starting with the classic SourceForge, via Microsoft's CodePlex, up to GitHub. Thousands of projects are hosted on each of these platforms, and a search […]

Follow-up on the SharePoint User Group at the Martini Club

15.06.2016 10:10:41 | Sebastian Gerling

Yesterday the SharePoint community gathered at the Martini Club in Munich to discuss SharePoint. Samuel Zürcher (https://sharepointszu.com/) gave an interesting talk on SharePoint 2016 hybrid and also shared his assessment of the maturity of the individual approaches. Over finger food we then let the event wind down. I […]
