.NET Developer Blogs

Creating a container component in Angular2

27.09.2016 21:00:00 | Jürgen Gutsch

In one of my last projects, I needed a shared, reusable component which can be extended with additional content or functionality by the view that uses it. In our case, it was a kind of menu bar used by multiple views. (A view in this case means a routing target.)

Creating such a component was easier than expected, even though I spent almost a whole day finding the solution: I played around with view and template providers, tried to access and manipulate the template, and even tried to create my own structural directive.

In the end, you just need to use the <ng-content> directive in the container component:

<nav>
  <div class="navigation pull-left">
    <ul>
      <!-- the menu items -->
    </ul>
  </div>
  <div class="pull-right">
    <ng-content></ng-content>
  </div>
</nav>

That's all. You don't need to write any TypeScript code to get this working. Using this component is now pretty intuitive:

<div class="nav-bar">
  <app-navigation>
    <button (click)="printDraft($event)">print draft</button>
    <button (click)="openPreview($event)">Show preview</button>
  </app-navigation>
</div>

The contents of the <app-navigation> element - the buttons - will now be placed into the <ng-content> placeholder.

After spending almost a whole day to get this working, my first question was: Is it really that easy? Yes, it is. That's all.

Maybe you already knew about this. But I wasn't able to find any hint about it in the docs, on Stack Overflow or in any blog. Maybe this requirement isn't needed that often. Eventually I stumbled upon a documentation page where ng-content was used, and I decided to write about it. I hope it will help someone else. :)

Material from BASTA! 2016

26.09.2016 12:03:00 | Jörg Neumann

Here is the material from my sessions at BASTA! 2016:



The material from the BASTA! Lab and the Xamarin workshop is available on request.

Material from MobileTechCon 2016

26.09.2016 11:58:00 | Jörg Neumann

Authentication in ASP.NET Core for your Web API and Angular2

21.09.2016 21:00:00 | Jürgen Gutsch

Authentication in a single page application is a bit special, if you only know the traditional ASP.NET way. It helps to think of the app as a completely independent application, like a mobile app. Token based authentication is the best solution for this kind of app. In this post, I'm going to give a high level overview and show a simple solution.

Intro

As written in my last posts about Angular2 and ASP.NET Core, I reduced ASP.NET Core to just an HTTP service that provides JSON based data to an Angular2 client. Some of my readers asked me how authentication is done in that case. I don't use any server generated log-in page, registration page or anything like that. The ASP.NET Core part only provides the web API and the static files for the client application.

There are many ways to protect your application. The simplest one is using Azure Active Directory. You could also set up a separate authentication server using IdentityServer4, to manage the users and roles and to provide token based authentication.

And that's the keyword: token based authentication is the solution for this case.

With token based authentication, the client (the web client, the mobile app, and so on) gets a string based, signed token after a successful log-in. The token also contains some user info and information about how long the token will be valid. This token needs to be stored on the client side and has to be submitted to the server with every request for a resource. Usually you use an HTTP header to submit that token. If the token is no longer valid, you need to perform a new log-in.
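
To make this more concrete, here is a minimal sketch of what submitting the token looks like from a plain .NET client. This is not part of the project described here; the class name is made up, and the API URL is just the one from the angular2-jwt sample below:

using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

public class ApiClient
{
    // the token string received from the log-in endpoint and stored by the client
    private readonly string _accessToken;

    public ApiClient(string accessToken)
    {
        _accessToken = accessToken;
    }

    public async Task<string> GetThingAsync()
    {
        using (var client = new HttpClient())
        {
            // submit the token to the server with every request, using the Authorization header
            client.DefaultRequestHeaders.Authorization =
                new AuthenticationHeaderValue("Bearer", _accessToken);

            return await client.GetStringAsync("http://example.com/api/thing");
        }
    }
}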

In one of our smaller projects, we didn't set up a separate authentication server and we didn't use Azure AD, because we needed a fast and cheap solution. Cheap from the customer's perspective.

The Angular2 part

On the client side we used angular2-jwt, which is an Angular2 module that handles authentication tokens: it checks the validity, reads meta information out of the token and so on. It also provides a wrapper around the Angular2 HTTP service. With this wrapper, the token is automatically passed back to the server via an HTTP header on every single request.

The workflow looks like this:

  1. If the token is not valid or doesn't exist on the client, the user gets redirected to the log-in route
  2. The user enters his credentials and presses the log-in button
  3. The data gets posted to the server, where a special middleware handles the request
    1. The user gets authenticated on the server side
    2. The token, including a validation date and some meta data, gets created
    3. The token gets returned to the client
  4. The client stores the token in local storage, a cookie or whatever, to use it on every new request.

angular2-jwt does most of the magic on the client for us. We just need to use it to check the availability and the validity of the token, every time we want to send a request to the server or change the view.

This is a small example (copied from the Github readme) about how the HTTP wrapper is used in Angular2:

import { AuthHttp, AuthConfig, AUTH_PROVIDERS } from 'angular2-jwt';

...

class App {

  thing: string;

  constructor(public authHttp: AuthHttp) {}

  getThing() {
    // this uses authHttp, instead of http
    this.authHttp.get('http://example.com/api/thing')
      .subscribe(
        data => this.thing = data,
        err => console.log(err),
        () => console.log('Request Complete')
      );
  }
}

More samples and details can be found directly on GitHub (https://github.com/auth0/angular2-jwt/), and there is also a detailed blog post about using angular2-jwt: https://auth0.com/blog/introducing-angular2-jwt-a-library-for-angular2-authentication/

The ASP.NET part

On the server side we use a separate open source project called SimpleTokenProvider. This is a pretty simple solution to authenticate users with their credentials and to create and provide the token. I would not recommend this approach for a huge, critical solution; in that case you should choose IdentityServer or another authentication provider like Azure AD to be more secure. The sources of that project need to be copied into your project, and you possibly need to change some lines, e.g. to authenticate the users against your database or whatever you use to store the user data. This project provides a middleware which listens on a defined path, like /api/tokenauth/. This URL is called with a POST request by the log-in view of the client application.

The authentication for the web API just uses the token sent with the current request. This is simply done with the built-in identity middleware. That means, if ASP.NET MVC gets a request to a controller or an action with an AuthorizeAttribute, it checks the request for incoming tokens. If the token is valid, the user is authenticated. If the user is also in the right role, he gets authorized.

We put the user's role information as additional claims into the token, so this information can be extracted from the token and used in the application.
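
On the web API side this doesn't need any special code. As a hedged sketch (the controller and the role name are made up for illustration), a protected controller could look like this:

using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Mvc;

[Route("api/invoices")]
public class InvoicesController : Controller
{
    // any request with a valid token is authenticated and may call this action
    [Authorize]
    [HttpGet]
    public IActionResult GetInvoices()
    {
        return Ok(new[] { "invoice 1", "invoice 2" });
    }

    // the role claim stored inside the token is used for the authorization here
    [Authorize(Roles = "Administrator")]
    [HttpPost]
    public IActionResult CreateInvoice()
    {
        return Ok();
    }
}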

To find and identify the user, we use the given UserManager and SignInManager. These managers are bound to the IdentityDataContext. These classes are already available when you create a new project with Identity in Visual Studio.

This way we can authenticate a user on the server side:

public async Task<ClaimsIdentity> GetIdentity(string email, string password)
{
    var result = await _signInManager.PasswordSignInAsync(email, password, false, lockoutOnFailure: false);
    if (result.Succeeded)
    {
        var user = await _userManager.FindByEmailAsync(email);
        var claims = await _userManager.GetClaimsAsync(user);

        return new ClaimsIdentity(new GenericIdentity(email, "Token"), claims);
    }

    // Credentials are invalid, or account doesn't exist
    return null;
}

And these claims will be used to create the JWT token in the TokenAuthentication middleware:

var username = context.Request.Form["username"];
var password = context.Request.Form["password"];

var identity = await identityResolver.GetIdentity(username, password);
if (identity == null)
{
    context.Response.StatusCode = 400;
    await context.Response.WriteAsync("Unknown username or password.");
    return;
}

var now = DateTime.UtcNow;

// Specifically add the jti (nonce), iat (issued timestamp), and sub (subject/user) claims.
// You can add other claims here, if you want:
var claims = new[]
{
    new Claim(JwtRegisteredClaimNames.Sub, username),
    new Claim(JwtRegisteredClaimNames.Jti, await _options.NonceGenerator()),
    new Claim(JwtRegisteredClaimNames.Iat, ToUnixEpochDate(now).ToString(), ClaimValueTypes.Integer64)
};

// Create the JWT and write it to a string
var jwt = new JwtSecurityToken(
    issuer: _options.Issuer,
    audience: _options.Audience,
    claims: claims,
    notBefore: now,
    expires: now.Add(_options.Expiration),
    signingCredentials: _options.SigningCredentials);
var encodedJwt = new JwtSecurityTokenHandler().WriteToken(jwt);

var response = new
{
    access_token = encodedJwt,
    expires_in = (int)_options.Expiration.TotalSeconds,
    admin = identity.IsAdministrator(),
    fullname = identity.FullName(),
    username = identity.Name
};

// Serialize and return the response
context.Response.ContentType = "application/json";
await context.Response.WriteAsync(JsonConvert.SerializeObject(response, _serializerSettings));

This code will not work if you just copy and paste it into your application, but it shows what needs to be done to create a token, and how the token is created and sent to the client. Nate Barbattini wrote a detailed article in his blog about how this SimpleTokenProvider works and how it needs to be used: https://stormpath.com/blog/token-authentication-asp-net-core
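
On the receiving side, the tokens get validated by the JwtBearer middleware, which is also what the linked article uses. This is just a hedged sketch for ASP.NET Core 1.0; the issuer, the audience and the key are assumptions and must match the values used to create the token:

// needs Microsoft.AspNetCore.Authentication.JwtBearer, Microsoft.IdentityModel.Tokens and System.Text
// in Startup.Configure(), before UseMvc():
var signingKey = new SymmetricSecurityKey(
    Encoding.ASCII.GetBytes("the-secret-behind-the-signing-credentials"));

app.UseJwtBearerAuthentication(new JwtBearerOptions
{
    AutomaticAuthenticate = true,
    AutomaticChallenge = true,
    TokenValidationParameters = new TokenValidationParameters
    {
        ValidIssuer = "ExampleIssuer",     // must match _options.Issuer
        ValidAudience = "ExampleAudience", // must match _options.Audience
        IssuerSigningKey = signingKey,     // the key behind _options.SigningCredentials
        ValidateIssuerSigningKey = true,
        ValidateLifetime = true
    }
});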

Conclusion

This is just a small overview. If you want more detailed information about how ASP.NET Identity works, you should definitely subscribe to the blogs of Dominick Baier and Brock Allen. The ASP.NET docs are also a good resource to learn more about ASP.NET security.

Update: Just a few hours ago, Scott Brady wrote a blog post about getting started with IdentityServer 4.

ASP.NET Core and Angular2 using dotnet CLI and Visual Studio Code

18.09.2016 21:00:00 | Jürgen Gutsch

This is another post about ASP.NET Core and Angular2. This time I use a cleaner and more lightweight way to host an Angular2 app inside an ASP.NET Core web application. I'm going to use the dotnet CLI and Visual Studio Code.

A few days ago an update for ASP.NET Core was announced. It is not a big one, but an important runtime update. You should install it if you already use ASP.NET Core 1.0. If you install ASP.NET Core for the first time (downloaded from http://get.asp.net/), the update is already included. A few days ago, the final version of Angular2 was also announced. So, we will use Angular 2.0.0 and ASP.NET Core 1.0.1.

This post is structured into nine steps:

#1 Create the ASP.NET Core web

The first step is to create the ASP.NET Core web application, which is easiest using the dotnet CLI. After downloading it from http://get.asp.net and installing it, you are directly able to use it. Choose any console you like and go to your working folder.

Type the following line to create a new web application inside that working folder:

> dotnet new -t web

If you use the dotnet CLI for the first time, this will take a few seconds; after that it is pretty fast.

Now you have a complete ASP.NET Core quick-start application - almost the same thing you get if you create a new application in Visual Studio 2015.

Now we need to restore the NuGet packages, which contain all the .NET Core and ASP.NET dependencies:

> dotnet restore

This takes a few seconds, depending on the number of packages and on the internet connection.

Once this is done, type dotnet run to start the app. You will see a URL in the console. Copy this URL and paste it into the browser's address bar. As you can see, you just need three console commands to create a working ASP.NET application.

#2 Setup the ASP.NET Core web

To support an Angular2 single page application, we need to prepare the Startup.cs a little bit. Because we don't want to use MVC, but just the web API, we need to remove the configured default route.

To support Angular routing, we need to handle 404 errors: in case a requested resource was not found on the server, it could be an Angular route. This means we should redirect requests which result in a 404 error to the index.html. We will create this file in the wwwroot folder later on.

The Configure method in the Startup.cs now looks like this:

public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
{
    loggerFactory.AddConsole(Configuration.GetSection("Logging"));
    loggerFactory.AddDebug();

    if (env.IsDevelopment())
    {
        app.UseDeveloperExceptionPage();
        app.UseDatabaseErrorPage();
        app.UseBrowserLink();
    }
    else
    {
        app.UseExceptionHandler("/Home/Error");
    }

    app.Use(async (context, next) =>
    {
        await next();

        if (context.Response.StatusCode == 404
            && !Path.HasExtension(context.Request.Path.Value))
        {
            context.Request.Path = "/index.html";
            await next();
        }
    });

    app.UseStaticFiles();

    app.UseIdentity();

    app.UseMvc();
}

#3 The front-end dependencies

To develop the front-end with Angular2, we need some tools like TypeScript, Webpack and NPM. We use TypeScript to write the client code, and a simple Webpack configuration to transpile the TypeScript code to JavaScript and to copy the dependencies to the wwwroot folder.

NPM is used to get all that stuff, including Angular itself, onto the development machine. We need to configure the package.json a little bit. The easiest way is to use the same configuration as in the Angular2 quick-start tutorial on angular.io.

You need to have Node.js installed on your machine to get all the tools working:

{
  "name": "webapplication",
  "version": "0.0.0",
  "private": true,
  "scripts": {
    "start": "tsc && concurrently \"npm run tsc:w\" \"npm run lite\" ",
    "lite": "lite-server",
    "postinstall": "typings install",
    "tsc": "tsc",
    "tsc:w": "tsc -w",
    "typings": "typings"
  },
  "dependencies": {
    "@angular/common": "2.0.0",
    "@angular/compiler": "2.0.0",
    "@angular/core": "2.0.0",
    "@angular/forms": "2.0.0",
    "@angular/http": "2.0.0",
    "@angular/platform-browser": "2.0.0",
    "@angular/platform-browser-dynamic": "2.0.0",
    "@angular/router": "3.0.0",
    "@angular/upgrade": "2.0.0",

    "core-js": "2.4.1",
    "reflect-metadata": "0.1.3",
    "rxjs": "5.0.0-beta.12",
    "systemjs": "0.19.27",
    "zone.js": "0.6.21",
    
    "bootstrap": "3.3.6"
  },
  "devDependencies": {
    "ts-loader": "0.8.2",
    "ts-node": "0.5.5",
    "typescript": "1.8.10",
    "typings": "1.3.2",
    "webpack": "1.13.2"
  }
}

You should also install Webpack, Typings and TypeScript globally on your machine:

> npm install -g typescript
> npm install -g typings
> npm install -g webpack

The TypeScript build needs a configuration to know how to build the code. This is why we need a tsconfig.json:

{
  "compilerOptions": {
    "target": "es5",
    "module": "commonjs",
    "moduleResolution": "node",
    "sourceMap": true,
    "emitDecoratorMetadata": true,
    "experimentalDecorators": true,
    "removeComments": false,
    "noImplicitAny": false
  }
}

And TypeScript needs type definitions for all the used libraries which are not written in TypeScript. This is where Typings is used: Typings is a kind of package manager for TypeScript type definitions, and it also needs a configuration (a typings.json):

{
  "globalDependencies": {
    "core-js": "registry:dt/core-js#0.0.0+20160725163759",
    "jasmine": "registry:dt/jasmine#2.2.0+20160621224255",
    "node": "registry:dt/node#6.0.0+20160909174046"
  }
}

Now we can use npm install in the console to load all that stuff. This command automatically calls typings install as an NPM post-install event.

#4 Setup the single page

The Angular2 app is hosted on a single HTML page inside the wwwroot folder of the ASP.NET Core web. Add a new index.html file to the wwwroot folder:

<html>
    <head>
        <title>Angular 2 QuickStart</title>
        <meta charset="UTF-8">
        <meta name="viewport" content="width=device-width, initial-scale=1">
        <link rel="stylesheet"http://feedproxy.google.com href="css/site.css">
        <!-- 1. Load libraries -->
        <scripthttp://feedproxy.google.com src="js/core.js"></script>
        <scripthttp://feedproxy.google.com src="js/zone.js"></script>
        <scripthttp://feedproxy.google.com src="js/reflect.js"></script>
        <scripthttp://feedproxy.google.com src="js/system.js"></script>
        <!-- 2. Configure SystemJS -->
        <scripthttp://feedproxy.google.com src="systemjs.config.js"></script>
        <script>
          System.import('app').catch(function(err){ console.error(err); });
        </script>
    </head>
    <!-- 3. Display the application -->
    <body>
        <my-app>Loading...</my-app>
    </body>
</html>

Currently we don't have the JavaScript dependencies configured. This is what we will do in the next step.

#5 Configure Webpack

Webpack has two tasks in this simple tutorial. The first one is to copy some dependencies out of the node_modules folder into the wwwroot folder, because static files will only be provided out of this special folder. We need Core.JS, Zone.JS, Reflect-Metadata and System.JS. The second task is to build and bundle the Angular2 application (which is not yet written) and all its dependencies.

Let's see what this simple Webpack configuration (webpack.config.js) looks like:

module.exports = [
  {
    entry: {
      core: './node_modules/core-js/client/shim.min.js',
      zone: './node_modules/zone.js/dist/zone.js',
      reflect: './node_modules/reflect-metadata/Reflect.js',
      system: './node_modules/systemjs/dist/system.src.js'
    },
    output: {
      filename: './wwwroot/js/[name].js'
    },
    target: 'web',
    node: {
      fs: "empty"
    }
  },
  {
    entry: {
      app: './wwwroot/app/main.ts'
    },
    output: {
      filename: './wwwroot/app/bundle.js'
    },
    devtool: 'source-map',
    resolve: {
      extensions: ['', '.webpack.js', '.web.js', '.ts', '.js']
    },
    module: {
      loaders: [
        { test: /\.ts$/, loader: 'ts-loader' }
      ]
    }
  }];

We have two separate configurations for the mentioned tasks. This is not the best way to configure Webpack; e.g. the Angular2 Webpack Starter or the latest Angular CLI do all of this with a more complex Webpack configuration.

To run this configuration, just type webpack in the console. The first configuration writes out a few warnings, but will work anyway. The second config should fail, because we don't have the Angular2 app yet.

#6 Configure the App

We now need to load the Angular2 app and its dependencies. This is done with System.js, which also needs a configuration. We need a systemjs.config.js:

/**
 * System configuration for Angular 2 samples
 * Adjust as necessary for your application needs.
 */
(function (global) {
    System.config({
        paths: {
            // paths serve as alias
            'npm:': '../node_modules/'
        },
        // map tells the System loader where to look for things
        map: {
            // our app is within the app folder
            app: 'app',
            // angular bundles
            '@angular/core': 'npm:@angular/core/bundles/core.umd.js',
            '@angular/common': 'npm:@angular/common/bundles/common.umd.js',
            '@angular/compiler': 'npm:@angular/compiler/bundles/compiler.umd.js',
            '@angular/platform-browser': 'npm:@angular/platform-browser/bundles/platform-browser.umd.js',
            '@angular/platform-browser-dynamic': 'npm:@angular/platform-browser-dynamic/bundles/platform-browser-dynamic.umd.js',
            '@angular/http': 'npm:@angular/http/bundles/http.umd.js',
            '@angular/router': 'npm:@angular/router/bundles/router.umd.js',
            '@angular/forms': 'npm:@angular/forms/bundles/forms.umd.js',
            // other libraries
            'rxjs': 'npm:rxjs',
        },
        meta: {
            './app/bundle.js': {
                format: 'global'
            }
        },
        // packages tells the System loader how to load when no filename and/or no extension
        packages: {
            app: {
                main: './bundle.js',
                defaultExtension: 'js'
            },
            rxjs: {
                defaultExtension: 'js'
            }
        }
    });
})(this);

This file is almost identical to the file from the angular.io quick-start tutorial. We just need to change a few things:

The first thing is the path to node_modules, which is not on the same level as usual, so we need to change it to ../node_modules/. We also need to tell System.js that the bundle is not a CommonJS module; this is done with the meta property. I also changed the app main path to ./bundle.js instead of main.js.

#7 Create the app

Inside the wwwroot folder, create a new folder called app. Inside this new folder we need to create a first TypeScript file called main.ts:

import { platformBrowserDynamic } from '@angular/platform-browser-dynamic';
import { AppModule } from './app.module';

const platform = platformBrowserDynamic();
platform.bootstrapModule(AppModule);

This script calls the app.module.ts, which is the entry point to the app:

import { NgModule } from '@angular/core';
import { HttpModule } from '@angular/http';
import { FormsModule } from '@angular/forms';
import { BrowserModule } from '@angular/platform-browser';

import { AppComponent } from './app.component';
import { PersonService } from './person.service';

@NgModule({
    imports: [
        BrowserModule,
        FormsModule,
        HttpModule],
    declarations: [AppComponent],
    providers: [
        PersonService,
    ],
    bootstrap: [AppComponent]
})
export class AppModule { }

The module collects all the parts of our app and puts all the components and services together.

This is a small component with a small inline template:

import { Component, OnInit } from '@angular/core';
import { PersonService, Person } from './person.service';

@Component({
    selector: 'my-app',
    template: `
    <h1>My First Angular 2 App</h1>
    <ul>
    <li *ngFor="let person of persons">
    <strong></strong><br>
    from: <br>
    date of birth: 
    </li>
    </ul>
    `,
    providers: [
        PersonService
    ]
})
export class AppComponent implements OnInit {

    constructor(private _service: PersonService) { }

    ngOnInit() {
        this._service.loadData().then(data => {
            this.persons = data;
        });
    }

    persons: Person[] = [];
}

Finally, we need to create a service which calls the ASP.NET Core web API. We will create the API itself later on.

import { Injectable } from '@angular/core';
import { Http, Response } from '@angular/http';
import { Observable } from 'rxjs/Rx';
import 'rxjs/add/operator/toPromise';

@Injectable()
export class PersonService {
    constructor(private _http: Http) { }

    loadData(): Promise<Person[]> {
        return this._http.get('/api/persons')
            .toPromise()
            .then(response => this.extractArray(response))
            .catch(this.handleErrorPromise);
    }    

    protected extractArray(res: Response, showprogress: boolean = true) {
        let data = res.json();
        return data || [];
    }

    protected handleErrorPromise(error: any): Promise<void> {
        try {
            error = JSON.parse(error._body);
        } catch (e) {
        }

        let errMsg = error.errorMessage
            ? error.errorMessage
            : error.message
                ? error.message
                : error._body
                    ? error._body
                    : error.status
                        ? `${error.status} - ${error.statusText}`
                        : 'unknown server error';

        console.error(errMsg);
        return Promise.reject(errMsg);
    }
}
export interface Person {
    name: string;
    city: string;
    dob: Date;
}

#8 The web API

The web API is pretty simple in this demo, just to show how it works:

using System;
using System.Collections.Generic;
using Microsoft.AspNetCore.Mvc;

namespace demo
{
    [Route("api/persons")]
    [ResponseCache(Location = ResponseCacheLocation.None, NoStore = true, Duration = -1)]
    public class PersonsController : Controller
    {
        [HttpGet]
        public IEnumerable<Person> GetPersons()
        {
            return new List<Person>
            {
                new Person{Name = "Max Musterman", City="Naustadt", Dob=new DateTime(1978, 07, 29)},
                new Person{Name = "Maria Musterfrau", City="London", Dob=new DateTime(1979, 08, 30)},
                new Person{Name = "John Doe", City="Los Angeles", Dob=new DateTime(1980, 09, 01)}
            };
        }
    }

    public class Person
    {
        public string Name { get; set; }
        public string City { get; set; }
        public DateTime Dob { get; set; }
    }

}

If you start the app using dotnet run, you can call the API using the URL http://localhost:5000/api/persons/. You'll see the three persons in the browser as a JSON result.
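
The result should look roughly like this (ASP.NET Core MVC serializes JSON with camel case property names by default, which also matches the Person interface on the client):

[
  { "name": "Max Musterman", "city": "Naustadt", "dob": "1978-07-29T00:00:00" },
  { "name": "Maria Musterfrau", "city": "London", "dob": "1979-08-30T00:00:00" },
  { "name": "John Doe", "city": "Los Angeles", "dob": "1980-09-01T00:00:00" }
]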

#9 That's it. Run the app.

Type webpack and dotnet run in the console to compile and bundle the client app and to start the application. After that, call the URL http://localhost:5000/ in a browser:

Conclusion

As you can see, hosting an Angular2 app inside an ASP.NET Core web application this way is much easier and more lightweight than using Visual Studio 2015.

Anyway, this is the last post about combining these two technologies, because this is only a good approach if you write a small application. For bigger applications you should separate the client application from the server part. The Angular2 app should be written using the Angular CLI. Working like this, both parts are completely independent, and they are much easier to set up and to deploy.

I pushed the demo code to GitHub. Try it out, play around with it and give me some feedback about it :)

Building a home robot: Part 2 - Neck design and movement

01.09.2016 20:00:00 | Daniel Springwald

(see all parts of "building a home robot")

My first approach to moving the neck and head was to use servos. Because rotating the head left/right has to move a large weight, I bought a servo driven ball-bearing base.

The first try was a servo driven ball-bearing base.



Mechanical everything worked fine – the base had : enough power solve the movement.
But it was very loud. It made the typical plastic server “scrieeeee” sound and the base amplified the sound by acoustic resonance.
It all sounded like a big cheap RC toy from the 80s :-/ Nothing you want to hear from a futuristic robot with a cute and modern body design.

The next idea was to create laser cut gears on my own and mount them on a big ball-bearing:

As the drive I wanted to use a stepper motor.

Everything seemed fine – even in a first manual test, moving the motor and gears by hand.
But the first electronically driven test was a little bit annoying: it was much quieter and stronger than the RC servo, but still loud.

The next idea to solve this was to use a fan belt to connect the gears, to prevent the plastic-on-plastic sound. But for this, the complete construction would have had to be changed.

So I tried another idea: 3D printing the motor gear from rubber instead of laser-cut acrylic:

This worked very well, and now the gears run almost silently.

The endstop to home the rotation:

Microsoft's PowerShell is now open source and available for Linux and OS X

22.08.2016 10:05:52 | Steffen Steinbrecher

PowerShell, which is based on the .NET Framework, has now been released by Microsoft as an open source project. The GitHub repository can be found here: https://github.com/PowerShell/PowerShell In addition, PowerShell is now available cross-platform for OS X and Linux. The downloads for the different platforms are available via the GitHub repository. PowerShell combines the philosophy, known from Unix shells, of […]

C#: Exception handling with asynchronous methods (async/await)

15.08.2016 18:05:23 | Steffen Steinbrecher

When using async/await, methods can have three different return types: Task, Task<T> or void. The best practices always say: “Avoid the return type void and always return a Task object!” But why is that? This will be explained in more detail in the following article. As already written in the introduction, […]

TFS 2015: Adding a new Windows Build Agent

11.08.2016 03:45:00 |

The TFS 2015 Build System

The build system before TFS 2015 was based on a pretty arcane XAML workflow engine which was manageable, but not fun to use. With TFS 2015 a new build system was implemented, which behaves pretty much the same way as other build systems (e.g. TeamCity or AppVeyor).

The “build workflow” is based on a simple “task”-concept.

There are many related topics in the TFS world, e.g. Release-Management, but this blogpost will just focus on the “Getting the system ready”-part.

TFS Build Agents

Like other parts of Microsoft, TFS is now also in the cross-platform business. The build system in TFS 2015 is capable of building a huge range of languages. All you need is a compatible build agent.

My (simple) goal was to build a .NET application on a Windows build agent via the new TFS 2015 build system.

Step 1: Adding a new build agent

Important - Download Agent.

This one is maybe the hardest part. Instead of a huge TFS-Agent-Installer.msi, you need to navigate to the “Agent pool” tab inside the TFS control panel.

You need at least one pool, and then you need to click the “Download Agent” button.

Step 2: Configure the agent

Configuration.

The .zip package contains the actual build agent executable and a .cmd file.

Invoke the “ConfigureAgent.cmd” file:

We run these agents as a Windows Service (which is one of the last configuration questions) and are pretty happy with the system.

Step 3: You are done

Now your new build agent should appear under the given build agent pool:

TFS Build Agents.

After googling around, I also found the corresponding TFS how-to, which describes more or less the complete setup. Well… now it is documented on MSDN and on this blog. Maybe this will help my future self ;)

Setup Angular2 & TypeScript in an ASP.NET Core project using Visual Studio

07.08.2016 21:00:00 | Jürgen Gutsch

In this post I try to explain how to set up an ASP.NET Core project with Angular2 and TypeScript in Visual Studio 2015.

There are two ways to set up an Angular2 application: The preferred way is to use the angular-cli, which is pretty simple. Unfortunately, the Angular CLI doesn't use the latest version yet. The other way is to follow the tutorial on angular.io, which sets up a basic starting point, but this needs a lot of manual steps. There are also two ways to set up how you want to develop your app with ASP.NET Core: One way is to separate the client app completely from the server part. It is pretty useful to decouple the server and the client, to create almost independent applications and to host them on different machines. The other way is to host the client app inside the server app. This is useful for small applications, to have everything in one place, and it is easy to deploy to a single server.

In this post I'm going to show you how to set up an Angular2 app which is hosted inside an ASP.NET Core application, using Visual Studio 2015. Going this way, the Angular CLI is not the right choice, because it already sets up a development environment for you, and all that stuff is configured a little differently. The effort to move this to Visual Studio would be too much. I will mostly follow the tutorial on http://angular.io/, but we need to change a few small things to get everything working in Visual Studio 2015.

Configure the ASP.NET Core project

Let's start with a new ASP.NET Core project based on .NET Core. (The name doesn't matter, so "WebApplication391" is fine.) We need to choose a Web API project, because the client side Angular2 app will only communicate with that API and we don't need all the predefined MVC stuff.

A Web API project can't serve static files like JavaScripts, CSS styles, images, or even HTML files. Therefore we need to add a reference to Microsoft.AspNetCore.StaticFiles in the project.json:

"Microsoft.AspNetCore.StaticFiles": "1.0.0 ",

And in the Startup.cs, we need to add the following line, just before the call of `UseMvc()`:

app.UseStaticFiles();

Another important thing we need to do in the Startup.cs is to support the routing of Angular2: if the browser calls a URL which doesn't exist on the server, it could be an Angular route, especially if the URL doesn't contain a file extension.

This means we need to handle the 404 error which will occur in such cases. We need to serve the index.html to the client if there was a 404 error on a request without a file extension. To do this we just need a simple lambda based middleware, registered just before we call UseStaticFiles():

app.Use(async (context, next) =>
{
    await next();

    if (context.Response.StatusCode == 404
        && !Path.HasExtension(context.Request.Path.Value))
    {
        context.Request.Path = "/index.html";
        await next();
    }
});

Inside the Properties folder we'll find a file called launchSettings.json. This file is used to configure the behavior of Visual Studio 2015 when we press F5 to run the application. Remove all "api/values" strings from this file, because otherwise Visual Studio will always call that specific web API every time you press F5.

Now the ASP.NET Core application is prepared, and we can start to follow the angular.io tutorial:

Let's start with the Node.js packages. Using Visual Studio, we can create a new "npm Configuration file" called package.json. Just copy the stuff from the tutorial:

{
  "name": "dotnetpro-ecollector",
  "version": "1.0.0",
  "scripts": {
    "start": "tsc && concurrently \"npm run tsc:w\" \"npm run lite\" ",
    "lite": "lite-server",
    "postinstall": "typings install && gulp restore",
    "tsc": "tsc",
    "tsc:w": "tsc -w",
    "typings": "typings"
  },
  "license": "ISC",
  "dependencies": {
    "@angular/common": "2.0.0-rc.4",
    "@angular/compiler": "2.0.0-rc.4",
    "@angular/core": "2.0.0-rc.4",
    "@angular/forms": "0.2.0",
    "@angular/http": "2.0.0-rc.4",
    "@angular/platform-browser": "2.0.0-rc.4",
    "@angular/platform-browser-dynamic": "2.0.0-rc.4",
    "@angular/router": "3.0.0-beta.1",
    "@angular/router-deprecated": "2.0.0-rc.2",
    "@angular/upgrade": "2.0.0-rc.4",
    "systemjs": "0.19.27",
    "core-js": "^2.4.0",
    "reflect-metadata": "^0.1.3",
    "rxjs": "5.0.0-beta.6",
    "zone.js": "^0.6.12",
    "angular2-in-memory-web-api": "0.0.14",
    "es6-promise": "^3.1.2",
    "es6-shim": "^0.35.0",
    "jquery": "^2.2.4",
    "bootstrap": "^3.3.6"
  },
  "devDependencies": {
    "gulp": "^3.9.1",
    "concurrently": "^2.0.0",
    "lite-server": "^2.2.0",
    "typescript": "^1.8.10",
    "typings": "^1.0.4"
  }
}

In this listing, I changed a few things:

  • I added "&& gulp restore" to the postinstall script
  • I also added Gulp to the devDependencies

After the file is saved, Visual Studio tries to load all the packages. This works, but VS shows a yellow exclamation mark because of an error. Until now, I haven't figured out what is going wrong here. To be sure all packages are properly installed, use the console, change the directory to the current project and type npm install.

The post install will possibly fail because gulp is not yet configured. We need gulp to copy the dependencies to the right location inside the wwwroot folder, because static files will only be loaded from that location. This is not part of the tutorial on angular.io, but is needed to fit the client stuff into Visual Studio. Using Visual Studio, we need to create a new "gulp Configuration file" with the name gulpfile.js:

var gulp = require('gulp');

gulp.task('default', function () {
    // place code for your default task here
});

gulp.task('restore', function() {
    gulp.src([
        'node_modules/@angular/**/*.js',
        'node_modules/angular2-in-memory-web-api/*.js',
        'node_modules/rxjs/**/*.js',
        'node_modules/systemjs/dist/*.js',
        'node_modules/zone.js/dist/*.js',
        'node_modules/core-js/client/*.js',
        'node_modules/reflect-metadata/reflect.js',
        'node_modules/jquery/dist/*.js',
        'node_modules/bootstrap/dist/**/*.*'
    ]).pipe(gulp.dest('./wwwroot/libs'));
});

The restore task copies all the needed files to the folder ./wwwroot/libs.

TypeScript needs some type definitions to get the types and API definitions of libraries which are not written in TypeScript or not available in TypeScript. To load these, we use another tool called "typings", which was already installed with NPM. This tool is a package manager for type definition files. We need to configure it with a typings.json:

{
  "globalDependencies": {
    "es6-shim": "registry:dt/es6-shim#0.31.2+20160317120654",
    "jquery": "registry:dt/jquery#1.10.0+20160417213236"
  }
}

Now we have to configure TypeScript itself. We can again add a new item using Visual Studio, to create a TypeScript configuration file. I would suggest not using the default content, but the contents from the angular.io tutorial:

{
  "compileOnSave": true,
  "compilerOptions": {
    "target": "es5",
    "module": "commonjs",
    "moduleResolution": "node",
    "sourceMap": true,
    "emitDecoratorMetadata": true,
    "experimentalDecorators": true,
    "removeComments": false,
    "noImplicitAny": false
  },
  "exclude": [
    "node_modules"
  ]
}

The only things I did to this file are adding the "compileOnSave" flag and excluding the "node_modules" folder from the TypeScript build, because we don't need to compile the TypeScript files contained there, and because we already moved the needed JavaScript files to ./wwwroot/libs.

If you use Git or any other source code repository, you should ignore the files generated out of our TypeScript files. In the case of Git, I simply add another .gitignore to the ./wwwroot/app folder:

#remove generated files
*.js
*.map

We do this because the JavaScript files are only needed to run the application and should be created automatically in the development environment or on a build server, before deploying the app.

The first app

That is all we need to prepare an ASP.NET Core project in Visual Studio 2015. Let's start creating the Angular app. The first step is to create an index.html in the wwwroot folder:

<html>
<head>
    <title>dotnetpro eCollector</title>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1">
    <link rel="stylesheet"http://feedproxy.google.com href="css/styles.css">
    <!-- 1. Load libraries -->
    <!-- Polyfill(s) for older browsers -->
    <scripthttp://feedproxy.google.com src="libs/shim.min.js"></script>
    <scripthttp://feedproxy.google.com src="libs/zone.js"></script>
    <scripthttp://feedproxy.google.com src="libs/Reflect.js"></script>
    <scripthttp://feedproxy.google.com src="libs/system.src.js"></script>
    <!-- 2. Configure SystemJS -->
    <scripthttp://feedproxy.google.com src="systemjs.config.js"></script>
    <script>
        System.import('app')
            .catch(function (err) { console.error(err); });
    </script>
</head>
<!-- 3. Display the application -->
<body>
    <my-app>Loading...</my-app>
</body>
</html>

As you can see, we load almost all JavaScript files from the libs folder, except systemjs.config.js. This file is needed to configure Angular2, to define which module is needed, where to find the dependencies and so on. Create a new JavaScript file, call it systemjs.config.js and paste the following content into it:

(function (global) {

    // map tells the System loader where to look for things
    var map = {
        'app': 'app', 
        'rxjs': 'lib/rxjs',
        '@angular': 'lib/@angular'
    };

    // packages tells the System loader how to load when no filename and/or no extension
    var packages = {
        'app': { main: 'main.js', defaultExtension: 'js' },
        'rxjs': { defaultExtension: 'js' },
        'angular2-in-memory-web-api': { defaultExtension: 'js' },
    };

    var packageNames = [
      '@angular/common',
      '@angular/compiler',
      '@angular/core',
      '@angular/http',
      '@angular/platform-browser',
      '@angular/platform-browser-dynamic',
      '@angular/router',
      '@angular/router-deprecated',
      '@angular/upgrade'
    ];

    packageNames.forEach(function (pkgName) {
        packages[pkgName] = { main: 'index.js', defaultExtension: 'js' };
    });

    var config = {
        map: map,
        packages: packages
    }

    // filterSystemConfig - index.html's chance to modify config before we register it.
    if (global.filterSystemConfig) { global.filterSystemConfig(config); }

    System.config(config);

})(this);

This file also defines a main entry point, main.js, which is the transpiled output of the TypeScript file main.ts we need to create in the next step. The main.ts bootstraps our Angular2 app:

import { bootstrap } from '@angular/platform-browser-dynamic';
import { AppComponent } from './app.component';

bootstrap(AppComponent);

We also need to create our first Angular2 component. Create a TypeScript file with the name app.component.ts inside the app folder:

import { Component } from '@angular/core';

@Component({
  selector: 'my-app',
  template: '<h1>My first Angular App in Visual Studio</h1>'
})
export class AppComponent { }

If everything works fine, Visual Studio should have created a JavaScript file for each TypeScript file, and the build should run. Pressing F5 should start the application, and a browser should open.

For a short moment, "Loading..." is visible in the browser. After the app is initialized and all the Angular2 magic has happened, you'll see the contents of the template defined in app.component.ts.

Conclusion

I propose using Visual Studio only for small single page applications, because it gets slower the more dynamically generated files need to be handled. ASP.NET Core handles dynamically generated files pretty well, but Visual Studio still doesn't: VS tries to track and manage all the files inside the project, which slows it down a lot. One solution is to disable source control in Visual Studio and use an external tool to manage the sources.

Another - even better - solution is not to use Visual Studio for front-end development. In a new project, I propose to separate front-end and back-end development and to use Visual Studio Code for the front-end development, or even for both. You need to learn a few things about NPM and Gulp, and you need to use a console in this case, but web development will be a lot faster and more lightweight with this approach. In one of the next posts, I'll show how I currently work with Angular2.

.NET Framework 4.6.2 released

06.08.2016 18:51:17 | Steffen Steinbrecher

.NET Framework version 4.6.2 has been released. There are new features in the following areas: Base Class Library, Common Language Runtime, ClickOnce, ASP.NET, SQL, Windows Presentation Foundation, Windows Communication Foundation. Here are a few of the new features: long path support (paths with more than 260 characters are now supported in the System.IO API), TLS […]

Enabling GZip for Web API 2

04.08.2016 10:08:00 | Martin Hey

With GZip compression you can reduce network load considerably - especially when transferring larger amounts of data in which similar words occur frequently. Even though the widely used JSON format is not quite as chatty as, for example, SOAP, the property names still appear in every object. When you transfer a list of objects, there is real potential for savings.

In contrast to content negotiation, which Web API handles itself, there is apparently no built-in mechanism for GZip compression. A search turned up various approaches - e.g. the one by Ben Foster or the one by Radenko Zec. The latter became the template for my current solution.

Step 1 - Create an action filter that modifies the response
public class CompressionAttribute : ActionFilterAttribute
{
    public override async Task OnActionExecutedAsync(HttpActionExecutedContext context, CancellationToken cancellationToken)
    {
        var acceptEncoding = context.Request.Headers.AcceptEncoding;
        var acceptsGzip = acceptEncoding.Contains(new System.Net.Http.Headers.StringWithQualityHeaderValue("gzip"));

        if (!acceptsGzip)
        {
            return;
        }

        var content = context.Response.Content;
        if (content == null)
        {
            return;
        }

        var headers = context.Response.Content.Headers;
        var bytes = await content.ReadAsByteArrayAsync();

        var zlibbedContent = (await Compress(bytes)) ?? new byte[0];
        context.Response.Content = new ByteArrayContent(zlibbedContent);

        foreach (var header in headers)
        {
            if (header.Key.Equals("Content-Length", StringComparison.OrdinalIgnoreCase))
            {
                continue;
            }
            context.Response.Content.Headers.Add(header.Key, header.Value);
        }
        context.Response.Content.Headers.Add("Content-Encoding", "gzip");
    }

    private static async Task<byte[]> Compress(byte[] value)
    {
        if (value == null)
        {
            return null;
        }

        using (var output = new MemoryStream())
        {
            using (var gzipStream = new GZipStream(output, CompressionMode.Compress, CompressionLevel.BestSpeed))
            {
                gzipStream.FlushMode = FlushType.Finish;
                await gzipStream.WriteAsync(value, 0, value.Length);
                   
            }
            return output.ToArray();
        }
    }
}
The compression itself is handled by Ionic.Zlib.

The implementation itself is quite simple. First, it checks whether the client accepts GZip encoding. If not, the attribute doesn't change the output. Otherwise the output is compressed and the response is rebuilt. Reassigning the response content also discards all previous content headers, which is why they have to be set again afterwards.

Finally, the client is informed that the content is compressed.

Step 2 - Apply the action filter
The action filter can now be used by annotating a Web API method with it, or by activating it globally for all requests.
public static class WebApiConfig
{
    public static void Register(HttpConfiguration config)
    {
        ...

        config.Filters.Add(new CompressionAttribute());

        ...
    }
}
This happens in the WebApiConfig.

Add HTTP headers to static files in ASP.NET Core

03.08.2016 21:00:00 | Jürgen Gutsch

Usually, static files like JavaScript, CSS, images and so on are cached on the client after the first request. But sometimes, you need to disable the cache or add special cache handling.

To provide static files in an ASP.NET Core application, you use the StaticFileMiddleware:

app.UseStaticFiles();

This extension method has two overloads. One of them takes a StaticFileOptions instance, which is our friend in this case. This options class has a property called OnPrepareResponse of type Action<StaticFileResponseContext>. Inside this action, you have access to the HttpContext and much more. Let's see what it looks like to set the cache lifetime to 12 hours:

app.UseStaticFiles(new StaticFileOptions()
{
    OnPrepareResponse = context =>
    {
        context.Context.Response.Headers["Cache-Control"] = 
                "private, max-age=43200";

        context.Context.Response.Headers["Expires"] = 
                DateTime.UtcNow.AddHours(12).ToString("R");
    }
});

Via the StaticFileResponseContext, you also have access to the currently handled file. With this info, it is possible to manipulate the HTTP headers just for a specific file or file type.
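
For example - a hypothetical variation of the options above - you could disable caching just for HTML templates and keep the 12 hour lifetime for everything else:

app.UseStaticFiles(new StaticFileOptions()
{
    OnPrepareResponse = context =>
    {
        // context.File is the IFileInfo of the currently handled file
        if (context.File.Name.EndsWith(".html", StringComparison.OrdinalIgnoreCase))
        {
            context.Context.Response.Headers["Cache-Control"] = "no-cache, no-store";
            context.Context.Response.Headers["Expires"] = "-1";
        }
        else
        {
            context.Context.Response.Headers["Cache-Control"] = "private, max-age=43200";
        }
    }
});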

This approach ensures that the client doesn't work with badly outdated files, but still uses cached versions while working with the app. We use this in an ASP.NET Core single page application which uses many JavaScript and HTML template files. In combination with continuous deployment, we need to ensure the application uses the latest files.

Building a home robot: Part 1 - introduction and head

31.07.2016 17:00:00 | Daniel Springwald

(see all parts of "building a home robot")

Ever since I was a child and saw the Tomy Omnibot, I have been fascinated by robot companions. In the following years I made several attempts to get or build one – for example the Sony Aibo in 2001 or my Cobra robot.

Some weeks ago I started my first serious project: a self driving, self charging home robot - including things like face detection and voice output.

My girlfriend was not sure if it would be a little bit scary to live together with a home robot. So I decided to create a first prank design for the robot head especially for her ;-)


(A scary prank design created from old animatronics parts)

The real head will be much more cute and abstract, using a small monitor instead of physical eyeballs.

As the robot “brain” I chose a Raspberry Pi 3, because it is small, fast and has low power consumption. Other reasons were: it is cheap, has a native camera, offers hardware connections via the GPIO port, and there is an official touchscreen monitor available.

To create most of the body parts I want to use my 3D printer.

In the past I often used 3D editing programs like Autodesk 3D Studio or Cinema 4D – but never real CAD programs. After some research for a low-cost but useful CAD program, I found OpenSCAD.

This free, open source tool, with its unusual approach of creating objects by writing program code, suits my needs perfectly.

Only 3 hours later I had finished my first CAD model of a head prototype and started the 3D printer. (I plan to upload all the STL files to thingiverse.com to share them with other makers who want to build their own home robots.)

The first prototype of the robot front head:

The following 4 improved prototypes:

An early prototype showing the complete head shape:

The final front part without the Raspberry Pi...

...and with the Raspberry Pi and the touchscreen monitor:


The next step will be the neck design and the motors for moving the head and the neck.

Continue reading: Part 2 - Neck design and movement

C#: BlockingCollection, using the MetroFtpClient as an example

24.07.2016 17:27:25 | Steffen Steinbrecher

Inside the MetroFtpClient (https://github.com/steve600/MetroFtpClient) there is a queue to manage the uploads and downloads to be executed. When processing the queue, you often want a certain degree of parallelism to increase performance (e.g. several simultaneous downloads). With .NET 4.0, Microsoft took a big step in this direction and gave developers […]

How to continuously deploy an ASP.NET Core 1.0 web app to Microsoft Azure

21.07.2016 21:00:00 | Jürgen Gutsch

We started the first real world project with ASP.NET Core RC2 a month ago, and we learned a lot of new stuff around ASP.NET Core:

  • Continuous Deployment to an Azure Web App
  • Token based authentication with Angular2
  • Setup Angular2 & TypeScript in a ASP.NET Core project
  • Entity Framework Core setup and initial database seeding

In this post, I'm going to show you how we set up continuous deployment for an ASP.NET Core 1.0 project, without tackling TypeScript and Angular2. Please remember: the tooling around .NET Core and ASP.NET Core is still in "preview" and will definitely change until RTM. I'll try to keep this post up-to-date. I won't use the direct deployment to an Azure Web App from a Git repository, for some reasons I mentioned in a previous post.

I will write some more about the other things we learned in one of the next posts.

Let's start with the build

Building is the easiest part of the entire deployment process. To build an ASP.NET Core 1.0 solution, you are able to use MSBuild.exe. Just pass the solution file to MSBuild and it will build all the projects in the solution.

The *.xproj files use specific targets, which wrap and use the dotnet CLI. You are also able to use the dotnet CLI directly: just call dotnet build for each project, or even simpler, call dotnet build in the solution folder and the tools will recursively go through all sub-folders, look for project.json files and build all the projects in the right build order.

Usually I define an output path to build all the projects into a specific folder. This makes it a lot easier for the next step:

Test the code

Some months ago, I wrote about unit testing DNX libraries (Xunit, NUnit). This didn't really change in .NET Core 1.0. Depending on the test framework, a test library could be a console application which can be called directly. In other cases a test runner is called, which gets the test libraries passed as arguments. We use NUnit to create our unit tests, and NUnit doesn't provide a separate runner for .NET Core yet. So all of our test libraries are console apps which build to a .exe file. We search the build output folder for our test libraries and call them one by one. We also pass the test output file name to those libraries, to get detailed test results.

This is pretty much all it takes to run the unit tests.
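
As a rough sketch - the folder, the file pattern and the command line argument are assumptions, and Shell.Exec is the same helper used for the deployment commands below - the test loop could look like this:

// using System.IO;
// find all test libraries in the build output folder and run them one by one
var testLibraries = Directory.GetFiles(buildOutputFolder, "*.Tests.exe");
foreach (var testLibrary in testLibraries)
{
    // pass the test output file name to the library to get detailed test results
    var resultFile = Path.ChangeExtension(testLibrary, ".TestResult.xml");
    Shell.Exec(testLibrary, "--result=\"" + resultFile + "\"", ".");
}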

Throw it to the clouds

Deployment was a little more tricky, but we learned how to do it from the Visual Studio output: if you do a manual publish with Visual Studio, the output window tells you how the deployment needs to be done. These are just two steps:

1. Publish to a specific folder using the "dotnet publish" command

We are calling dotnet publish with these arguments:

Shell.Exec("dotnet", "publish \"" + webPath + "\" --framework net461 --output \"" + 
    publishFolder + "\" --configuration " + buildConf, ".");
  • webPath contains the path to the web project which needs to be deployed
  • publishFolder is the publish target folder
  • buildConf defines the Debug or Release build (we build with Debug in dev environments)

2. Use msdeploy.exe to publish the complete publish folder to a remote machine

The remote machine, in our case, is an Azure Web App instance, but it could also be any other target machine. msdeploy.exe is not a new tool, but it still works, even with ASP.NET Core 1.0.

So we just need to call msdeploy.exe like this:

Shell.Exec(msdeploy, "-source:contentPath=\"" + publishFolder + "\" -dest:contentPath=" + 
    publishWebName + ",ComputerName=" + computerName + ",UserName=" + username + 
    ",Password=" + publishPassword + ",IncludeAcls='False',AuthType='Basic' -verb:sync -" + 
    "enablerule:AppOffline -enableRule:DoNotDeleteRule -retryAttempts:20",".")
  • msdeploy contains the path to msdeploy.exe, which is usually C:\Program Files (x86)\IIS\Microsoft Web Deploy V3\msdeploy.exe.
  • publishFolder is the publish target folder from the previous command.
  • publishWebName is the name of the Azure Web App name, which also is the target content path.
  • computerName is the name/URL of the remote machine, in our case "https://" + publishWebName + ".scm.azurewebsites.net/msdeploy.axd".
  • username and password are the deployment credentials. The password is hashed, as in the publish profile you can download from Azure; just copy and paste the hashed password.

Conclusion

I didn't mention all the work that needs to be done to prepare the web app. We also use Angular2 with TypeScript, so we also need to get all the NPM dependencies, move the needed files to the wwwroot folder, and bundle and minify all the JavaScript files. This is also done in our build & deployment chain. But for this post, it should be enough to describe just the basic steps for a usual ASP.NET Core 1.0 app.

OpenSource: Introducing MetroFtpClient

19.07.2016 12:41:08 | Steffen Steinbrecher

In this post I would like to present a small tool for FTP access. The MetroFtpClient (https://github.com/steve600/MetroFtpClient) is based on the PrismMahAppsSample (https://github.com/steve600/PrismMahAppsSample) and the standard .NET classes FtpWebRequest/FtpWebResponse. This project again makes use of several open source projects. Here is an overview:

  • Dragablz – https://github.com/ButchersBoy/Dragablz
  • MahApps.Metro – https://github.com/MahApps/MahApps.Metro
  • MaterialDesignInXAMLToolkit – https://github.com/ButchersBoy/MaterialDesignInXamlToolkit
  • Newtonsoft.Json – https://github.com/JamesNK/Newtonsoft.Json
  • OxyPlot – https://github.com/oxyplot/oxyplot

[…]

Visual Studio Code 1.3 - Tabs, Extensions View and More News

14.07.2016 12:30:29 | Kay Giza

With version 1.3, Visual Studio Code (VSCode) has received some decisive improvements and highlights. In this blog post I would like to present some of the news... [... more in this blog post on Giza-Blog.de]


Working with user secrets in ASP.​NET Core applications.

10.07.2016 21:00:00 | Jürgen Gutsch

In the past there was a study about critical data in GitHub projects. The authors wrote a crawler to find passwords, user names and other secret stuff in projects on GitHub. And they found a lot of such data in public projects, even in projects of huge companies, which should care a lot about security.

Most of these credentials are stored in .config files. Sure, you need to configure the access to a database somewhere, and you also need to configure the credentials for storages, mail servers, FTP, whatever. In many cases these credentials are used for development, with far more rights than the production credentials.

Fact is: secret information shouldn't be pushed to any public source code repository. Even better: it shouldn't be pushed to any source code repository at all.

But what is the solution? How should we tell our app where to get this secret information?

On Azure, you are able to configure your settings directly in the application settings of your web app. These settings override the ones in your config file, no matter whether it's a web.config or an appsettings.json.

But we can't do the same on the local development machine. There is no configuration like this. How and where do we save secret credentials?

With .NET Core, there is something similar now: the SecretManager tool, provided by the .NET Core SDK (Microsoft.Extensions.SecretManager.Tools), which you can access with the dotnet CLI.

This tool stores your secrets locally on your machine. It is not a highly secure password manager like KeePass, but on your development machine it gives you the possibility NOT to store your secrets in a config file inside your project. And this is the important thing here.

To use the SecretManager tool, you need to add it to the "tools" section of your project.json, like this:

"Microsoft.Extensions.SecretManager.Tools": {
  "version": "1.0.0-preview2-final",
  "imports": "portable-net45+win8+dnxcore50"
},

Be sure you have a userSecretsId in your project.json. With this ID the SecretManager tool assigns the user secrets to your app:

"userSecretsId": "aspnet-UserSecretDemo-79c563d8-751d-48e5-a5b1-d0ec19e5d2b0",

If you create a new ASP.NET Core project with Visual Studio, the SecretManager tool is already added.

Now you just need to access your secrets inside your app. In a new Visual Studio project, this should already be done and look like this:

public Startup(IHostingEnvironment env)
{
    _hostingEnvironment = env;

    var builder = new ConfigurationBuilder()
        .SetBasePath(env.ContentRootPath)
        .AddJsonFile("appsettings.json", optional: true, reloadOnChange: true)
        .AddJsonFile($"appsettings.{env.EnvironmentName}.json", optional: true);

    if (env.IsDevelopment())
    {
        // For more details on using the user secret store see 
        // http://go.microsoft.com/fwlink/?LinkID=532709
        builder.AddUserSecrets();

        // This will push telemetry data through Application 
        // Insights pipeline faster, allowing you to view results 
        // immediately.
        builder.AddApplicationInsightsSettings(developerMode: true);
    }

    builder.AddEnvironmentVariables();
    Configuration = builder.Build();
}

If not, add a NuGet reference to Microsoft.Extensions.Configuration.UserSecrets 1.0.0 to your project.json and call builder.AddUserSecrets(); as shown here.

The extension method AddUserSecrets() loads the secret information of that project into the ConfigurationBuilder. If the keys of the secrets are equal to the keys in the previously loaded appsettings.json, the app settings will be overwritten.

Once this is all done, you are able to use the tool to store new secrets:

dotnet user-secrets set key value

If your settings live in a separate section of your appsettings.json, you need to combine the section name and the setting name, separated by a colon, to build the user secret key.

I created settings like this:

"AppSettings": {
    "MySecretKey": "Hallo from AppSettings",
    "MyTopSecretKey": "Hallo from AppSettings"
},

To overwrite the keys with the values from the SecretManager tool, I need to create entries like this:

dotnet user-secrets set AppSettings:MySecretKey "Hello from UserSecretStore"
dotnet user-secrets set AppSettings:MyTopSecretKey "Hello from UserSecretStore"
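
Reading such a value in the app then works the same way regardless of whether it came from appsettings.json or from the user secret store, since matching keys are simply overridden. A minimal sketch, assuming the Configuration property built in the Startup snippet above:

// in development this returns "Hello from UserSecretStore",
// otherwise the value from appsettings.json
var mySecretKey = Configuration["AppSettings:MySecretKey"];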

BTW: to override an existing key with a new value, just set the secret again with the same key and the new value.

This way to handle secret data works pretty fine for me.

The SecretManager tool knows three more commands:

  • dotnet user-secrets clear: removes all secrets from the store
  • dotnet user-secrets list: shows you all existing keys
  • dotnet user-secrets remove <key>: removes the specific key

Just type dotnet user-secrets --help to see more information about the existing commands.

If you need to handle some more secrets in your project, it possibly makes sense to create a small batch file to add the keys, or to share the settings with build and test environments. But never ever push this file to the source code repository ;)

CAKE: Building solutions with C# & Roslyn

09.07.2016 18:15:00 |


CAKE - C# Make

  • A DSL for build tasks (e.g. build following projects, copy stuff, deploy stuff etc.)
  • It’s just C# code that gets compiled via Roslyn
  • Active community, OSS & written in C#
  • You can get CAKE via NuGet
  • Before we begin you might want to check out the actual website of CAKE
  • Cross Platform support

Our goal: building, running tests, packaging NuGet packages, etc.

I already did a couple of MSBuild- and FAKE-related blog posts, so if you are interested in these topics as well, go ahead (some are quite old; there is a high chance that some pieces no longer apply):

Ok… now back to CAKE.

Let’s start with the basics: Building

I created a pretty simple WPF app and followed these instructions.

The build.cake script

My script is a simplified version of this build script:

// ARGUMENTS
var target = Argument("target", "Default");

// TASKS
Task("Restore-NuGet-Packages")
    .Does(() =>
{
    NuGetRestore("CakeExampleWithWpf.sln");
});

Task("Build")
    .IsDependentOn("Restore-NuGet-Packages")
    .Does(() =>
{
      MSBuild("CakeExampleWithWpf.sln", settings =>
        settings.SetConfiguration("Release"));

});

// TASK TARGETS
Task("Default").IsDependentOn("Build");

// EXECUTION
RunTarget(target);

If you know FAKE or MSBuild, this is more or less the same structure: you define tasks, which may depend on other tasks. At the end you invoke one task and the dependency chain does its work.
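
To extend such a chain, you just declare another task and wire it in via IsDependentOn. As a small sketch (the Clean task and its path are my assumptions, not part of the sample script):

// hypothetical extra step: wipe the old build output before restoring
Task("Clean")
    .Does(() =>
{
    // CleanDirectory is one of CAKE's built-in aliases
    CleanDirectory("./CakeExampleWithWpf/bin");
});

Adding .IsDependentOn("Clean") to the "Restore-NuGet-Packages" task would then pull this step into the chain automatically.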

Invoke build.cake

The “build.ps1” will invoke “tools/cake.exe” with the input file “build.cake”.

“build.ps1” is just a helper. This PowerShell script will download nuget.exe, fetch the CAKE NuGet package and extract it under a /tools folder. If you don't mind binary files in your source control, you don't need this PowerShell script.

Our first CAKE script!

The output is very well formatted and should explain the mechanics behind it well enough:

Time Elapsed 00:00:02.86
Finished executing task: Build

========================================
Default
========================================
Executing task: Default
Finished executing task: Default

Task                          Duration
--------------------------------------------------
Restore-NuGet-Packages        00:00:00.5192250
Build                         00:00:03.1315658
Default                       00:00:00.0113019
--------------------------------------------------
Total:                        00:00:03.6620927

The first steps are pretty easy, it's much easier than MSBuild, and it feels good if you know C#.

The super simple intro code can be found on GitHub.

Re-MVPed

08.07.2016 12:06:00 | Jörg Neumann


My MVP award in the “Windows Platform Development” category has been extended for another year. Thank you, Microsoft!


How web development changed for me over the last 20 years

07.07.2016 21:00:00 | Jürgen Gutsch

The web changed pretty fast within the last 20 years. More and more logic moves from the server side to the client side. More complex JavaScript needs to be written on the client side. And some freaky things happened in the last few years: JavaScript moved to the server and web technology moved to the desktop. That is nothing new, but who was thinking about that 20 years ago?

The web changed, but so did my technology stack. It seems my stack changed back to the roots. 20 years ago, I started with HTML and JavaScript, moving forward to classic ASP using VBScript. In 2001 I started playing around with ASP.NET and VB.NET and used them in production until the end of 2006. In 2007 I started writing ASP.NET using C#. HTML and JavaScript were still involved, but more or less wrapped in third-party controls, and jQuery was an alias for JavaScript at that time. All about JavaScript was just jQuery. ASP.NET WebForms felt pretty huge and not really flexible, but it worked. Later - in 2010 - I also did a lot of stuff with Silverlight, WinForms and WPF.

ASP.NET MVC came up and the web stuff started to feel a little more natural again than ASP.NET WebForms did. From an ASP.NET developer's perspective, the web changed for the better: cleaner, more flexible, more lightweight and even more natural.

But there was something new coming up, things from outside the ASP.NET world: strong JavaScript libraries like Knockout, Backbone and later on Angular and React. The first single page application frameworks came up (sorry, I didn't want to mention the crappy ASP.NET Ajax thing...) and the UI logic moved from the server to the client. (Well, we did a pretty cool SPA back in 2005, but we didn't think about creating a framework out of it.)

NodeJS changed the world again by using JavaScript on the server. You just need two different languages (HTML and JavaScript) to create cool web applications. I didn't really care about NodeJS, except for using it behind the scenes, because some tools are based on it. Maybe that was a mistake, who knows... ;)

Now we got ASP.NET Core, which feels a lot more natural than classic ASP.NET MVC.

Natural in this case means it feels almost the same as writing classic ASP. It means using and working with the stateless web, instead of trying to fix it. It means working with the request and response more directly than in classic ASP.NET MVC, and even more so than in ASP.NET WebForms. It doesn't mean writing the same unstructured, crappy shit as with classic ASP. ;)

Since we got the pretty cool client-side JavaScript frameworks and simplified, minimalistic server-side frameworks, the server part has been reduced to just serving static files and serving data over RESTish services.

This is the time when it makes sense to have a deeper look into TypeScript. Until now it didn't make sense to me. I had been writing JavaScript for around 20 years, more and less complex scripts, but I never wrote as much JavaScript within a single project as after I started using AngularJS last year. Angular2 was also the reason to have a deep look into TypeScript, 'cause it is now completely written in TypeScript. And it makes absolute sense to use it.

A few weeks ago I started the first real NodeJS project. A desktop application which uses NodeJS to provide a high flexible scripting run-time for the users. NodeJS provides the functionality and the UI to the users. All written in TypeScript, instead of plain JavaScript. Why? Because TypeScript has a lot of unexpected benefits:

  • You are still able to write JavaScript ;)
  • It helps you to write small modules and structured code
  • it helps you to write NodeJS compatible modules
  • In general you don't need to write all the JavaScript overhead code for every module
  • You will just focus on the features you need to write

This is why TypeScript became a great benefit to me. Sure, a typed language is also useful in many cases, but - having worked with JS for 20 years - I also like the flexibility of implicitly typed JavaScript and I'm pretty familiar with it. That means, from my perspective, the good thing about TypeScript is that I am still able to write implicitly typed code in TypeScript and to use the flexibility of JavaScript. This is why I wrote "You are still able to write JavaScript".

The web technology changed, my technology stack changed and the tooling changed. Everything got more lightweight, even the tools. The console came back and the IDEs changed back to the roots: just being text editors with some benefits like syntax highlighting and IntelliSense. Currently I prefer to use the "Swiss army knife" Visual Studio Code or Adobe Brackets, depending on the type of project. Both start pretty fast and include nice features.

Using such lightweight IDEs is pure fun. Everything is fast, because the machine's resources can be used by the apps I need to develop, instead of by the IDE I need to use to develop them. This makes development a lot faster.

Starting the IDE today means starting cmder (my favorite console on Windows), changing to the project folder, starting a console command to watch the TypeScript files and compile them on save, starting another console to use tools like NPM, gulp, typings, the dotnet CLI and NodeJS, and starting my favorite lightweight editor to write some code. :)

Using coroutines to create tutorials in Unity 3D

07.07.2016 01:59:00 | Daniel Springwald

When I read about Unity's co-routine concept for the first time, I thought to myself: how can this be useful?

In the meantime I have found out that co-routines are one of the most interesting features of Unity 3D.

For my current game project I use them for several purposes; the most useful one is to control interactive level tutorials.

The game contains an advisor avatar:

It guides the player through the level and automatically appears when a milestone is reached and the next hint is needed.

Co-routines are the perfect tool to manage this kind of tutorial.

  1. Create a co-routine and run it when the level starts.
  2. If an initial introduction is needed, this should be the first command inside the co-routine.
  3. Create a WHILE loop which waits until the next milestone event happens and contains a “yield return null;”.
  4. What a milestone is depends on the kind of game you are working on. For my current game project these are “opening a dialogue”, “selecting a specific object” or “reaching a special place in the level”.
  5. When the condition of the milestone becomes TRUE, the WHILE loop will exit. In my game the next command after the WHILE loop invokes the advisor popup to explain the next step.
  6. Then the next WHILE loop to wait for the next milestone follows – and so on. 

Here is an example of how such tutorial code could look:

    protected IEnumerator HintPlayback(int moneyToAdd, int itemsToBuy)
    {
        // give the player a moment before the first hint appears
        yield return new WaitForSeconds(4);

        ShowMessage("Please look at the pending tasks.");

        // milestone 1: wait until the task list has been opened
        while (!this.tasks.AreOpen) yield return null;

        ShowMessage(string.Format("Please add some money - at least {0}$.", moneyToAdd));

        // milestone 2: wait until enough money has been added
        while (this.money < moneyToAdd) yield return null;

        ShowMessage(string.Format("Perfect. Now please buy at least {0} items.", itemsToBuy));

        // milestone 3: wait until enough items have been bought
        while (this.items.count < itemsToBuy) yield return null;

        ShowMessage("You have completed the tutorial.");

        yield break;
    }
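
The co-routine does nothing until it is started, e.g. when the level loads (step 1 of the list above). A minimal sketch, with example argument values:

    void Start()
    {
        // start the tutorial co-routine as soon as the level starts
        StartCoroutine(HintPlayback(100, 3));
    }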

 

You can also skip one or more hints if milestones are skipped: just check for both conditions (for milestone 1 and 2) in the WHILE loop of milestone 1.

Writing blog posts using Pretzel

05.07.2016 21:00:00 | Jürgen Gutsch

So far I have written more than 30 blog posts with Pretzel and it works pretty well. From my current perspective it was a good decision to make this huge change and to move to that pretty cool and lightweight system.

I'm using MarkdownPad 2 to write the posts. Writing is much easier now, the process is simplified and publishing is almost automated. I also added my blog's CSS to that editor to get a nice preview.

The process of writing and publishing new posts goes like this:

  1. Creating a new draft article and saving it in the _drafts folder
  2. Working on that draft
  3. Moving the finished article to the _posts folder
  4. Committing and pushing that post to GitHub
  5. Around 30 seconds later the post is published on Azure

This process allows me to write offline in the train while traveling to the office in Basel. This is the most important thing to me.

The other big change was switching to English. I now get more readers and feedback from around the world. Most readers now come from the US, the UK, India and Russia, but also from the other European countries, Australia and the Middle East (and Cluj in Romania).

Maybe I lost some readers from the German-speaking area (Germany, Switzerland and Austria) who liked to read my posts in German (I need to find a good translation service to integrate), but I gained some more from around the world.

Writing feels good in both English and MarkdownPad :) From my perspective it was a good decision to change the blog system and even the language.

To learn more about Pretzel, have a look at my previous post about using Pretzel.

How to continuously deploy a ASP.​NET Core 1.0 web app to Microsoft Azure

03.07.2016 21:00:00 | Jürgen Gutsch

We started the first real-world project with ASP.NET Core RC2 a month ago and we learned a lot of new stuff around ASP.NET Core:

  • Continuous Deployment to an Azure Web App
  • Token based authentication with Angular2
  • Setup Angular2 & TypeScript in a ASP.NET Core project
  • Entity Framework Core setup and initial database seeding

In this post, I'm going to show you how we set up continuous deployment for an ASP.NET Core 1.0 project, without tackling TypeScript and Angular2. Please remember: the tooling around .NET Core and ASP.NET Core is still in "preview" and will definitely change until RTM. I'll try to keep this post up to date. I won't use the direct deployment to an Azure Web App from a git repository, for some reasons I mentioned in a previous post.

I will write some more about the other things we learned in one of the next posts.

Let's start with the build

Building is the easiest part of the entire deployment process. To build an ASP.NET Core 1.0 solution, you can use MSBuild.exe. Just pass the solution file to MSBuild and it will build all projects in the solution.

The *.xproj files use specific targets, which wrap and use the dotnet CLI. You are also able to use the dotnet CLI directly: just call dotnet build for each project, or even simpler, call dotnet build in the solution folder and the tools will recursively go through all sub-folders, look for project.json files and build all the projects in the right order.
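
In a build script this can be a single call, sketched here with the same Shell.Exec helper that is used for publishing below (solutionDir and buildConf are placeholders):

// run "dotnet build" in the solution folder; the tooling resolves the build order
Shell.Exec("dotnet", "build --configuration " + buildConf, solutionDir);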

Usually I define an output path to build all the projects into a specific folder. This makes it a lot easier for the next step:

Test the code

Some months ago, I wrote about unit testing DNX libraries (Xunit, NUnit). This didn't really change in .NET Core 1.0. Depending on the test framework, a test library can be a console application which is called directly; in other cases a test runner is called which gets the test libraries passed as arguments. We use NUnit for our unit tests, and NUnit doesn't provide a separate runner for .NET Core yet. So all of our test libraries are console apps that build to .exe files, and we search the build output folder for the test libraries and call them one by one. We also pass the test output file name to those libraries to get detailed test results.
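
In script form, that loop could look roughly like this (a sketch; the *.Tests.exe naming pattern and the --result option of the NUnit console runner are assumptions about our setup):

// call every test console app found in the build output, one by one,
// and let each one write a detailed test result file
foreach (var testApp in Directory.GetFiles(buildOutputFolder, "*.Tests.exe"))
{
    Shell.Exec(testApp, "--result=\"" + testApp + ".results.xml\"", ".");
}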

That's pretty much all it takes to run the unit tests.

Throw it to the clouds

Deployment was a little more tricky, but we learned how to do it from the Visual Studio output: if you do a manual publish with Visual Studio, the output window will tell you how the deployment needs to be done. These are just two steps:

1. Publish to a specific folder using the "dotnet publish" command

We are calling dotnet publish with these arguments:

Shell.Exec("dotnet", "publish \"" + webPath + "\" --framework net461 --output \"" + 
    publishFolder + "\" --configuration " + buildConf, ".");
  • webPath contains the path to the web project which needs to be deployed
  • publishFolder is the publish target folder
  • buildConf defines the Debug or Release build (we build with Debug in dev environments)

2. Use msdeploy.exe to publish the complete publish folder to a remote machine

The remote machine in our case is an instance of an Azure Web App, but it could also be any other target machine. msdeploy.exe is not a new tool, but it still works, even with ASP.NET Core 1.0.

So we just need to call msdeploy.exe like this:

Shell.Exec(msdeploy, "-source:contentPath=\"" + publishFolder + "\" -dest:contentPath=" + 
    publishWebName + ",ComputerName=" + computerName + ",UserName=" + username + 
    ",Password=" + publishPassword + ",IncludeAcls='False',AuthType='Basic' -verb:sync -" + 
    "enablerule:AppOffline -enableRule:DoNotDeleteRule -retryAttempts:20",".")
  • msdeploy contains the path to msdeploy.exe, which is usually C:\Program Files (x86)\IIS\Microsoft Web Deploy V3\msdeploy.exe.
  • publishFolder is the publish target folder from the previous command.
  • publishWebName is the name of the Azure Web App name, which also is the target content path.
  • computerName is the name/URL of the remote machine; in our case "https://" + publishWebName + ".scm.azurewebsites.net/msdeploy.axd".
  • username and password are the deployment credentials. The password is hashed, as in the publish profile that you can download from Azure. Just copy and paste the hashed password.

Conclusion

I didn't mention all the work that needs to be done to prepare the web app. We also use Angular2 with TypeScript, so we also need to get all the NPM dependencies, move the needed files to the wwwroot folder, and bundle and minify all the JavaScript files. This is also done in our build & deployment chain. But for this post, it should be enough to describe just the basic steps for a usual ASP.NET Core 1.0 app.
