.NET Developer Blogs

Xamarin Exception: Could not load file or assembly Mono.Posix while debugging

26.06.2017 16:03:38 | Andre Kraemer

After one of the recent updates to Visual Studio 2017.2, I got the following error message while debugging an Android app:

EXCEPTION: Mono.Debugging.Soft.DisconnectedException: The connection with the debugger has been lost. The target application may have exited. ---> System.IO.FileNotFoundException: Can not load 'Mono.Posix, Version=2.0.0.0, Culture=neutral, PublicKeyToken=0738eb9f132ed756' or its dependencies. File not found.
   at Mono.Debugging.Soft.SoftDebuggerSession.ResolveSymbolicLink(String path)
   at Mono.Debugging.Soft.SoftDebuggerSession.PathsAreEqual(String p1, String p2)
   at Mono.Debugging.Soft.SoftDebuggerSession.FindLocationByMethod(MethodMirror method, String file, Int32 line, Int32 column, Boolean& insideTypeRange)
   at Mono.Debugging.Soft.SoftDebuggerSession.FindLocationByType(TypeMirror type, String file, Int32 line, Int32 column, Boolean& genericMethod, Boolean& insideTypeRange)
   at Mono.Debugging.Soft.SoftDebuggerSession.ResolveBreakpoints(TypeMirror type)
   at Mono.Debugging.Soft.SoftDebuggerSession.HandleTypeLoadEvents(TypeLoadEvent[] events)
   at Mono.Debugging.Soft.SoftDebuggerSession.HandleEventSet(EventSet es)
   at Mono.Debugging.Soft.SoftDebuggerSession.EventHandler()

Fortunately, a solution to the problem was found quickly. As you can read on the Xamarin release page, the file Mono.Posix.dll is indeed not installed by the Visual Studio installer.

Until the problem is fixed with an update, the following workaround, taken from the link above, helps:

  • Download the missing Mono.Posix file.
  • Extract the archive.
  • Check in the properties of Mono.Posix.dll whether the file was signed by Xamarin Inc.
  • If necessary, "unblock" the file on the General tab of the properties dialog.
  • For Visual Studio 2017: copy the file Mono.Posix.dll into the "Xamarin.VisualStudio" extensions directory. On my machine this was: C:\Program Files (x86)\Microsoft Visual Studio\2017\Enterprise\Common7\IDE\Extensions\Xamarin.VisualStudio
  • Close and restart Visual Studio if it was running during the copy.

Fortunately, the problem should only be temporary, since it has already been set to resolved in the Xamarin Bugzilla. With Visual Studio 2017.3 the problem should be gone.

GraphQL end-point Middleware for ASP.NET Core

21.06.2017 21:00:00 | Jürgen Gutsch

The feedback on my last blog post about the GraphQL end-point in ASP.NET Core was amazing. That post was mentioned on Reddit, shared many times on Twitter, linked on http://asp.net and - I'm pretty glad about that - it was mentioned in the ASP.NET Community Standup.

Because of that, and because GraphQL is really awesome, I decided to make the GraphQL middleware available as a NuGet package. I made some small improvements to make this middleware more configurable and easier to use in the Startup.cs.

NuGet

Currently the package is a prerelease version. That means you need to enable loading prerelease versions of NuGet packages:

  • Package name: GraphQl.AspNetCore
  • Version: 1.0.0-preview1
  • https://www.nuget.org/packages/GraphQl.AspNetCore/

Install via Package Manager Console:

PM> Install-Package GraphQl.AspNetCore -Pre

Install via dotnet CLI:

dotnet add package GraphQl.AspNetCore --version 1.0.0-preview1

Using the library

You still need to configure your GraphQL schema using the graphql-dotnet library, as described in my last post. Once this is done, open your Startup.cs and add a using for the GraphQl.AspNetCore library:

using GraphQl.AspNetCore;

You can use two different ways to register the GraphQl Middleware:

app.UseGraphQl(new GraphQlMiddlewareOptions
{
  GraphApiUrl = "/graph", // default
  RootGraphType = new BooksQuery(bookRepository),
  FormatOutput = true // default: false
});
app.UseGraphQl(options =>
{
  options.GraphApiUrl = "/graph-api";
  options.RootGraphType = new BooksQuery(bookRepository);
  options.FormatOutput = false; // default
});

Personally I prefer the second way, which is more readable in my opinion.

The root graph type needs to be passed to the GraphQlMiddlewareOptions object. Depending on the implementation of your root graph type, you may need to inject the data repository, an Entity Framework DbContext, or whatever you want to use to access your data. In this case I reuse the IBookRepository of the last post and pass it to the BooksQuery, which is my root graph type.

I registered the repository like this:

services.AddSingleton<IBookRepository, BookRepository>();

and needed to inject it into the Configure method:

public void Configure(
  IApplicationBuilder app,
  IHostingEnvironment env,
  ILoggerFactory loggerFactory,
  IBookRepository bookRepository)
{
  // ...
}

Another valid option is to also add the BooksQuery to the dependency injection container and inject it into the Configure method.
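
A rough sketch of that variant could look like this (the registration of the BooksQuery and its dependency is an assumption about your setup):

public void ConfigureServices(IServiceCollection services)
{
  services.AddSingleton<IBookRepository, BookRepository>();
  services.AddSingleton<BooksQuery>();
  services.AddMvc();
}

public void Configure(
  IApplicationBuilder app,
  IHostingEnvironment env,
  BooksQuery booksQuery)
{
  app.UseGraphQl(options =>
  {
    options.GraphApiUrl = "/graph-api";
    options.RootGraphType = booksQuery;
  });
}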

Options

The GraphQlMiddlewareOptions are pretty simple. Currently there are only three properties to configure:

  • RootGraphType: This configures your GraphQL query schema and needs to be set. If this property is unset an ArgumentNullException will be thrown.
  • GraphApiUrl: This property defines your GraphQL endpoint path. The default is set to /graph, which means your endpoint is available under yourdomain.tld/graph
  • FormatOutput: This property defines whether the output is prettified and indented for debugging purposes. The default is set to false.

This should be enough for now. If needed, it is possible to expose the Newtonsoft.Json settings, which are used in the GraphQL library, later on.

One more thing

I would be happy if you try this library and give me some feedback about it. A demo application to quickly start playing around with it is available on GitHub. Feel free to raise some issues and create some PRs on GitHub to improve this middleware.

Slides from the XPC

01.06.2017 17:57:00 | Jörg Neumann

Here are the slides of my talk ".NET Standard: One library to rule them all" from the XPC.

Using Visual Studio Code & Team Foundation Version Control (TFVC)

30.05.2017 03:45:00 |

Recently we started working on an Angular 4 app, but all other parts of the application (e.g. the backend stuff) were stored in a good old TFVC-based repository (inside a Team Foundation Server 2015). Unfortunately, building an Angular app with the full-blown Visual Studio and the “default” Team Explorer workflow is not really practical. Another point for using Visual Studio Code was that most other online resources about learning Angular are using VS Code.

Our goal was to keep one repository, otherwise it would be harder to build and maintain.

First plan: Migrate to Git

First we tried to migrate our complete code base to Git with this generally awesome tool. Unfortunately for us it failed because of our quite large branch-tree. I tried it on a smaller code base and it worked without any issues.

At this point we needed another solution, because we wanted to get started on the actual application - so we tried to stick with TFVC.

Important: I would always recommend Git over TFVC, because that’s the way our industry is currently moving, and at some point in the future we will do this too.

If you have similar problems like us: Read on!

Second plan: Get the TFVC plugin working in Visual Studio Code

Good news: Since April 2017 there is a Visual Studio Team Services extension for Visual Studio Code that also supports TFVC!

Requirements:

  • Team Foundation Server 2015 Update 2
  • An existing local workspace configuration (at least currently, check this GitHub issue for further information)
  • The actual extension

Be aware: Local Workspaces!

Even though I have been using TFS for a couple of years, I only recently discovered that TFS supports two different “workflows”. The “default” workflow always needs a connection to the TFS to check out files etc. There is an alternative mode called “local” mode, which seems to work like SVN. The difference is that you can create a local file and the TFVC client will “detect” those changes. Read more about the differences here.


Configuration

In our on-premises TFS 2015 world I only needed this configuration line in my user settings:

...
"tfvc.location": "C:\\Program Files (x86)\\Microsoft Visual Studio\\2017\\Professional\\Common7\\IDE\\CommonExtensions\\Microsoft\\TeamFoundation\\Team Explorer\\TF.exe",
...

Action!

Now when I point VS Code to my local workspace folder, the TFVC plugin will kick in and I see the familiar “change”-tracking:


It is not perfect, because I still need to set up and “manage” things (e.g. get the history etc.) via the full-blown Visual Studio, but with this setup it is “do-able”.

A first glimpse into .NET Core 2.0 Preview 1 and ASP.NET Core 2.0.0 Preview 1

29.05.2017 21:00:00 | Jürgen Gutsch

At the Build 2017 conference Microsoft announced the preview 1 versions of .NET Core 2.0, of the .NET Standard 2.0 and ASP.NET Core 2.0. I recently had a quick look into it and want to show you a little bit about it with this post.

.NET Core 2.0 Preview 1

Rich Lander (Program Manager at Microsoft) wrote about the release of the preview 1, .NET Standard 2.0, and tools support in this post: Announcing .NET Core 2.0 Preview 1. It is important to read the first part about the requirements carefully, especially the requirement of Visual Studio 2017 15.3 Preview. At first quick look I was wondering about the requirement of installing a preview version of Visual Studio 2017, because I had already installed the final version a few months ago. But the detail is in the numbers. The final version of Visual Studio 2017 is 15.2. The new tooling for the .NET Core 2.0 preview is in 15.3, which is currently in preview.

So if you want to use .NET Core 2.0 preview 1 with Visual Studio 2017, you need to install the preview of 15.3.

The good thing is, the preview can be installed side by side with the current final version of Visual Studio 2017. It doesn't double the usage of disk space, because both versions are able to share some SDKs, e.g. the Windows SDK. But you need to install the add-ins you want to use for this version separately.

After Visual Studio, you need to install the new .NET Core SDK, which also installs .NET Core 2.0 Preview 1 and the .NET CLI.

The .NET CLI

After the new version of .NET Core is installed, type dotnet --version in a command prompt. It will show you the version of the currently used .NET SDK:

Wait. I installed a preview 1 version and this is now the default on the entire machine? Yes.

The CLI uses the latest installed SDK on the machine by default. But anyway, you are able to run different .NET Core SDKs side by side. To see what versions are installed on your machine, type dotnet --info in a command prompt, copy the first part of the base path and paste it into a new Explorer window:

You are able to use all of them if you want to.

This is possible by adding a "global.json" to your solution folder. This is a pretty small file which defines the SDK version you want to use:

{
  "projects": [ "src", "test" ],
  "sdk": {
    "version": "1.0.4"
  }
}

Inside the folder "C:\git\dotnetcore", I added two different folders: the "v104" should use the current final version 1.0.4 and the "v200" should use the preview 1 of 2.0.0. To get it working I just needed to put the "global.json" into the "v104" folder:

The SDK

Now I want to have a look into the new SDK. The first thing I do after installing a new version is to type dotnet --help in a command prompt. The first level help doesn't contain any surprises, just the version number differs. The most interesting difference is visible by typing dotnet new --help. We get a new template to add an ASP.NET Core Web App based on Razor pages. We also get the possibility to just add single files, like a razor page, "NuGet.config" or a "Web.Config". This is pretty nice.

I also played around with the SDK by creating a new console app. I typed dotnet new console -n consoleapp:

As you can see in the screenshot dotnet new will directly download the NuGet packages from the package source. It runs dotnet restore for you. It is not a super cool feature but good to know if you get some NuGet restore errors while creating a new app.

When I opened the "consoleapp.csproj", I saw the expected TargetFramework "netcoreapp2.0":

<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>netcoreapp2.0</TargetFramework>
  </PropertyGroup>

</Project>

This is the only difference between the 2.0.0 preview 1 and the 1.0.4.

A lot more changes have been made in ASP.NET Core. Let's have a quick look at those too:

ASP.NET Core 2.0 Preview 1

Also for ASP.NET Core 2.0 Preview 1, Jeffrey T. Fritz (Program Manager for ASP.NET) wrote a pretty detailed announcement post in the webdev blog: Announcing ASP.NET Core 2.0.0-Preview1 and Updates for .NET Web Developers.

To create a new ASP.NET Web App, I need to type dotnet new mvc -n webapp in a command prompt window. This command immediately creates the web app and starts to download the needed packages:

Let's see what changed, starting with the "Program.cs":

public class Program
{
  public static void Main(string[] args)
  {
    BuildWebHost(args).Run();
  }

  public static IWebHost BuildWebHost(string[] args) =>
    WebHost.CreateDefaultBuilder(args)
      .UseStartup<Startup>()
      .Build();
}

The first thing I noticed is the encapsulation of the code that creates and configures the WebHostBuilder. In the previous versions it was all in the static void Main. But there's no instantiation of the WebHostBuilder anymore. This is hidden in the .CreateDefaultBuilder() method. This looks a little cleaner now, but also hides the configuration from the developer. It is still possible to use the old way to configure the WebHostBuilder, but this wrapper does a little more than the old configuration. This method also wraps the configuration of the ConfigurationBuilder and the LoggerFactory. The default configurations were moved from the "Startup.cs" to the .CreateDefaultBuilder(). Let's have a look into the "Startup.cs":

public class Startup
{
  public Startup(IConfiguration configuration)
  {
    Configuration = configuration;
  }

  public IConfiguration Configuration { get; }

  // This method gets called by the runtime. Use this method to add services to the container.
  public void ConfigureServices(IServiceCollection services)
  {
    services.AddMvc();
  }

  // This method gets called by the runtime. Use this method to configure the HTTP request pipeline.
  public void Configure(IApplicationBuilder app, IHostingEnvironment env)
  {
    if (env.IsDevelopment())
    {
      app.UseDeveloperExceptionPage();
    }
    else
    {
      app.UseExceptionHandler("/Home/Error");
    }

    app.UseStaticFiles();

    app.UseMvc(routes =>
               {
                 routes.MapRoute(
                   name: "default",
                   template: "{controller=Home}/{action=Index}/{id?}");
               });
  }
}

This file is much cleaner now, too.

But if you now want to customize the Configuration, the Logging and the other stuff, you need to replace the .CreateDefaultBuilder() with the previous style of bootstrapping the application, or you need to extend the WebHostBuilder returned by this method. You could have a look into the sources of the WebHost class in the ASP.NET repository on GitHub (around line 150) to see how this is done inside the .CreateDefaultBuilder(). The code of that method looks pretty familiar to anyone who has already used the previous version.
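
Just as a reminder, a rough sketch of that previous style of bootstrapping could look like this (the exact set of calls depends on what your app needs):

public static void Main(string[] args)
{
  // explicit configuration of the WebHostBuilder, as in ASP.NET Core 1.x
  var host = new WebHostBuilder()
    .UseKestrel()
    .UseContentRoot(Directory.GetCurrentDirectory())
    .UseIISIntegration()
    .UseStartup<Startup>()
    .Build();

  host.Run();
}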

BTW: BrowserLink was removed from the templates in this preview version, which is good from my perspective, because it caused an error while starting up the application.

Result

This is just a first short glimpse into the .NET Core 2.0 Preview 1. I need some more time to play around with it and learn a little more about the upcoming changes. For sure I need to rewrite my post about the custom logging a little bit :)

BTW: Last week, I created a 45 min video about it in German. This is not a video with good quality. It is quite bad. I just wanted to test a new microphone and Camtasia Studio, and I chose ".NET Core 2.0 Preview 1" as the topic to present. Even if it has an awful quality, maybe it is useful to some of my German-speaking readers anyway. :)

I'll come up with some more .NET Core 2.0 topics within the next months.

Exploring GraphQL and creating a GraphQL endpoint in ASP.NET Core

28.05.2017 21:00:00 | Jürgen Gutsch

A few weeks ago, I found some time to have a look at GraphQL and even at the .NET implementation of GraphQL. It is pretty amazing to see it in action, and it is easier than expected to create a GraphQL endpoint in ASP.NET Core. In this post I'm going to show you how it works.

The Graph Query Language

GraphQL was invented by Facebook in 2012 and released to the public in 2015. It is a query language to tell the API exactly what data you want to have. This is the difference to REST, where you need to query different resources/URIs to get different data. In GraphQL there is one single point of access to the data you want to retrieve.

That also makes planning the API a little more complex. You need to think about what data you want to provide and you need to think about how you want to provide that data.

While playing around with it, I created a small book database. The idea is to provide data about books and authors.

Let's have a look at a few examples. The query to get the ISBN and the name of a specific book looks like this:

{
  book(isbn: "822-5-315140-65-3"){
    isbn,
    name
  }
}

This looks similar to JSON, but it isn't. The property names are not set in quotes, which means it is not really JavaScript Object Notation. This query needs to be sent inside the body of a POST request to the server.

The Query gets parsed and executed against a data source on the server and the server should send the result back to the client:

{
  "data": {
    "book": {
      "isbn": "822-5-315140-65-3",
      "name": "ultrices enim mauris parturient a"
    }
  }
}

If we want to know something about the author, we need to ask about it:

{
  book(isbn: "822-5-315140-65-3"){
    isbn,
    name,
    author{
      id,
      name,
      birthdate
    }
  }
}

This is the possible result:

{
  "data": {
    "book": {
      "isbn": "822-5-315140-65-3",
      "name": "ultrices enim mauris parturient a",
      "author": {
        "id": 71,
        "name": "Henderson",
        "birthdate": "1937-03-20T06:58:44Z"
      }
    }
  }
}

You need a list of books, including the authors? Just ask for it:

{
  books{
    isbn,
    name,
    author{
      id,
      name,
      birthdate
    }
  }
}

The list is too large? Just limit the result, to get only 20 items:

{
  books(limit: 20) {
    isbn,
    name,
    author{
      id,
      name,
      birthdate
    }
  }
}

Isn't that nice?

To learn more about GraphQL and the specifications, visit http://graphql.org/

The Book Database

The book database is just fake. I love to use GenFu to generate dummy data. So I did the same for the books and the authors and created a BookRepository:

public class BookRepository : IBookRepository
{
  private IEnumerable<Book> _books = new List<Book>();
  private IEnumerable<Author> _authors = new List<Author>();

  public BookRepository()
  {
    GenFu.GenFu.Configure<Author>()
      .Fill(_ => _.Name).AsLastName()
      .Fill(_=>_.Birthdate).AsPastDate();
    _authors = A.ListOf<Author>(40);

    GenFu.GenFu.Configure<Book>()
      .Fill(p => p.Isbn).AsISBN()
      .Fill(p => p.Name).AsLoremIpsumWords(5)
      .Fill(p => p.Author).WithRandom(_authors);
    _books = A.ListOf<Book>(100);
  }

  public IEnumerable<Author> AllAuthors()
  {
    return _authors;
  }

  public IEnumerable<Book> AllBooks()
  {
    return _books;
  }

  public Author AuthorById(int id)
  {
    return _authors.First(_ => _.Id == id);
  }

  public Book BookByIsbn(string isbn)
  {
    return _books.First(_ => _.Isbn == isbn);
  }
}

public static class StringFillerExtensions
{
  public static GenFuConfigurator<T> AsISBN<T>(
    this GenFuStringConfigurator<T> configurator) where T : new()
  {
    var filler = new CustomFiller<string>(
      configurator.PropertyInfo.Name, 
      typeof(T), 
      () =>
      {
        return MakeIsbn();
      });
    configurator.Maggie.RegisterFiller(filler);
    return configurator;
  }
  
  public static string MakeIsbn()
  {
    // 978-1-933988-27-6
    var a = A.Random.Next(100, 999);
    var b = A.Random.Next(1, 9);
    var c = A.Random.Next(100000, 999999);
    var d = A.Random.Next(10, 99);
    var e = A.Random.Next(1, 9);
    return $"{a}-{b}-{c}-{d}-{e}";
  }
}

GenFu provides a useful set of so-called fillers to generate data randomly. There are fillers to generate URLs, emails, names, last names, states of the US and Canada and so on. I also needed an ISBN generator, so I created one by extending the generic GenFuStringConfigurator.

The BookRepository is registered as a singleton in the Dependency Injection container, to work with the same set of data while the application is running. You are able to add some more information to that repository, like publishers and so on.
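
For reference, the singleton registration in the ConfigureServices method of the Startup.cs could look like this:

services.AddSingleton<IBookRepository, BookRepository>();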

GraphQL in ASP.NET Core

Fortunately there is a .NET Standard compatible implementation of GraphQL on GitHub, so there's no need to parse the queries yourself. This library is also available as a NuGet package:

<PackageReference Include="GraphQL" Version="0.15.1.678" />

The examples provided on GitHub are pretty easy. They write the result directly to the output, which means the entire ASP.NET application is a GraphQL server. But I want to add GraphQL as an ASP.NET Core middleware, to make the GraphQL implementation a separate part of the application. This way you are able to use REST-based POST and PUT requests to add or update the data and use GraphQL to query it.

I also want the middleware to listen on the sub path "/graph":

public class GraphQlMiddleware
{
  private readonly RequestDelegate _next;
  private readonly IBookRepository _bookRepository;

  public GraphQlMiddleware(RequestDelegate next, IBookRepository bookRepository)
  {
    _next = next;
    _bookRepository = bookRepository;
  }

  public async Task Invoke(HttpContext httpContext)
  {
    var sent = false;
    if (httpContext.Request.Path.StartsWithSegments("/graph"))
    {
      using (var sr = new StreamReader(httpContext.Request.Body))
      {
        var query = await sr.ReadToEndAsync();
        if (!String.IsNullOrWhiteSpace(query))
        {
          var schema = new Schema { Query = new BooksQuery(_bookRepository) };
          var result = await new DocumentExecuter()
            .ExecuteAsync(options =>
                          {
                            options.Schema = schema;
                            options.Query = query;
                          }).ConfigureAwait(false);
          CheckForErrors(result);
          await WriteResult(httpContext, result);
          sent = true;
        }
      }
    }
    if (!sent)
    {
      await _next(httpContext);
    }
  }

  private async Task WriteResult(HttpContext httpContext, ExecutionResult result)
  {
    var json = new DocumentWriter(indent: true).Write(result);
    httpContext.Response.StatusCode = 200;
    httpContext.Response.ContentType = "application/json";
    await httpContext.Response.WriteAsync(json);
  }

  private void CheckForErrors(ExecutionResult result)
  {
    if (result.Errors?.Count > 0)
    {
      var errors = new List<Exception>();
      foreach (var error in result.Errors)
      {
        var ex = new Exception(error.Message);
        if (error.InnerException != null)
        {
          ex = new Exception(error.Message, error.InnerException);
        }
        errors.Add(ex);
      }
      throw new AggregateException(errors);
    }
  }
}

public static class GraphQlMiddlewareExtensions
{
  public static IApplicationBuilder UseGraphQL(this IApplicationBuilder builder)
  {
    return builder.UseMiddleware<GraphQlMiddleware>();
  }
}

With this kind of middleware, I can extend my application's Startup.cs with GraphQL:

app.UseGraphQL();

As you can see, the BookRepository gets passed into this middleware via constructor injection. The most important part is this line:

var schema = new Schema { Query = new BooksQuery(_bookRepository) };

This is where we create a schema, which is used by the GraphQL engine to provide the data. The schema defines the structure of the data you want to provide. This is all done in a root type called BooksQuery. This type gets the BookRepository.

This query is a GraphType, provided by the GraphQL library. You need to derive from an ObjectGraphType and configure the schema in the constructor:

public class BooksQuery : ObjectGraphType
{
  public BooksQuery(IBookRepository bookRepository)
  {
    Field<BookType>("book",
                    arguments: new QueryArguments(
                      new QueryArgument<StringGraphType>() { Name = "isbn" }),
                      resolve: context =>
                      {
                        var id = context.GetArgument<string>("isbn");
                        return bookRepository.BookByIsbn(id);
                      });

    Field<ListGraphType<BookType>>("books",
                                   resolve: context =>
                                   {
                                     return bookRepository.AllBooks();
                                   });
  }
}
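
The limit argument from the query example above is not part of this snippet. A possible sketch of how it could be added to the "books" field (assuming GetArgument returns the type's default value when the argument is missing):

Field<ListGraphType<BookType>>("books",
  arguments: new QueryArguments(
    new QueryArgument<IntGraphType> { Name = "limit" }),
  resolve: context =>
  {
    // 0 means the "limit" argument was not passed
    var limit = context.GetArgument<int>("limit");
    var books = bookRepository.AllBooks();
    return limit > 0 ? books.Take(limit) : books;
  });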

Using the GraphQL library, all types used in the query to define the schema are some kind of GraphType, even the BookType:

public class BookType : ObjectGraphType<Book>
{
  public BookType()
  {
    Field(x => x.Isbn).Description("The isbn of the book.");
    Field(x => x.Name).Description("The name of the book.");
    Field<AuthorType>("author");
  }
}

The difference is just the generic ObjectGraphType<Book>; the same pattern is also used for the AuthorType. The properties of the Book which are simple types, like the name or the ISBN, are mapped directly with the lambda. The complex typed properties like the Author are mapped via another generic ObjectGraphType, which is an ObjectGraphType<Author> in that case.
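
The AuthorType is not shown in this post; following the same pattern, it could look roughly like this (the exact field set is an assumption):

public class AuthorType : ObjectGraphType<Author>
{
  public AuthorType()
  {
    Field(x => x.Id).Description("The id of the author.");
    Field(x => x.Name).Description("The name of the author.");
    Field(x => x.Birthdate).Description("The birthdate of the author.");
  }
}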

This is how you create your schema, which can then be used to query the data.

Conclusion

If you want to play around with this demo, I pushed it to a repository on GitHub.

These are my first steps using GraphQL and I really like it. I think this is pretty useful and will reduce the effort on both the client side and the server side a lot. Even if the effort to create the schema is a lot more than creating just a Web API controller, you usually need to create a lot more than just one single Web API controller anyway.

This also reduces the amount of data between the client and the server, because the client can just load the needed data and doesn't need to GET or POST all the unneeded stuff.

I think I'll use it a lot more in future projects.

What do you think?

5 free eBooks about machine learning

23.05.2017 13:15:03 | Kazim Bahar

Articles on artificial intelligence and machine learning appear every day. But what does that have to do with UI design and...

Scientific studies on code quality

23.05.2017 12:10:43 | Hendrik Loesch

That the way we write code directly influences how well it is understood is something anyone can imagine who has ever had to read code without meaningful variable names. These relationships are now also the subject of scientific studies. I would like to link two papers that deal with this. Shorter Identifier Names […]

Providing files for automated tests

22.05.2017 18:48:20 | Hendrik Loesch

The well-known unit testing frameworks are used not only for unit tests, but also for automated tests in general and thus for integration tests as well. In this context it quickly becomes necessary to provide a number of files, too. Not infrequently this ends with those files not being found during the test […]

Material from the Seacon

15.05.2017 13:36:00 | Jörg Neumann

Material from the JAX

15.05.2017 13:35:00 | Jörg Neumann

ASP.NET Core in trouble

09.05.2017 21:00:00 | Jürgen Gutsch

ASP.NET Core today

Currently ASP.NET Core - Microsoft's new web framework - can be used on top of .NET Core and on top of the .NET Framework. This fact is pretty nice, because you are able to use all the new features of ASP.NET Core with the power of the huge but well-known .NET Framework. On the other hand, the new cross-platform .NET Core is also nice, but with a smaller set of features. Today you have the choice between being cross-platform or using the full .NET Framework. This isn't really bad.

Actually it could be better. Let's see why:

What is the issue?

Microsoft removed the support of the full .NET Framework for ASP.NET Core 2.0 and some developers are not really happy about that. See this GitHub issue thread. ASP.NET Core 2.0 will just run on .NET Core 2.0. This fact results in a hot discussion within that GitHub issue.

It also resulted in some misleading and confusing headlines and contents from some German IT news publishers.

While the discussion was running, David Fowler said on Twitter that it's best to think of ASP.NET Core 2.0 and .NET Core 2.0 as the same product.

Does this make sense?

I followed the discussion and thought a lot about it. And yes, it starts to make sense to me.

.NET Standard

What many people don't recognize, or just forget about, is the .NET Standard. The .NET Standard is an API definition that tries to unify the APIs of .NET Core, .NET Framework and Xamarin. But it actually does a little more: it provides the API as a set of assemblies, which forward the types to the right framework.

Does it make sense to you? (Read more about the .NET Standard in this documentation)

Currently ASP.NET Core runs on top of .NET Core and .NET Framework, but actually uses a framework that is based on .NET Standard 1.4 and higher. All the referenced libraries which are used in ASP.NET Core are based on .NET Standard 1.4 or higher. Let's call them ".NET Standard libraries" ;) These libraries contain all the needed features, but don't reference a specific platform, only the .NET Standard API.

You are also able to create those kind of libraries with Visual Studio 2017.
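
Such a library is just a regular project that targets a netstandard TFM in its csproj; a minimal sketch:

<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <TargetFramework>netstandard1.4</TargetFramework>
  </PropertyGroup>

</Project>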

By creating such libraries you provide your functionality to multiple platforms like Xamarin, .NET Framework and .NET Core (depending on the .NET Standard Version you choose). Isn't that good?

And in .NET Framework apps you are able to reference .NET Standard based libraries.

About runtimes

.NET Core is just a runtime to run Apps on Linux, Mac and Windows. Let's see the full .NET Framework as a runtime to run WPF apps, Winforms apps and classic ASP.NET apps on Windows. Let's also see Xamarin as a runtime to run apps on iOS and Android.

Let's also assume that the .NET Standard 2.0 will provide almost the full API of the .NET Framework to your application, once it is finished.

Do we really need the full .NET Framework for ASP.NET Core, in this case? No, we don't really need it.

What if ...

  • ... .NET Framework, .NET Core and Xamarin are just runtimes?
  • ... .NET Standard 2.0 is as complete as the .NET Framework?
  • ... .NET Standard 2.0 libraries have the same features as the .NET Framework?
  • ... ASP.NET Core 2.0 uses the .NET Standard 2.0 libraries?

Do we really need the full .NET Framework as a runtime for ASP.NET Core?

I think, no!

Does it also make sense to use the full .NET Framework as a runtime for Xamarin apps?

I also think, no.

Conclusion

ASP.NET Core and .NET Core shouldn't really be shipped as one product, as David said, because ASP.NET Core runs on top of .NET Core, and maybe another technology could also run on top of .NET Core in the future. But maybe it makes sense to ship it as one product, to tell people that ASP.NET Core 2.0 is based on top of .NET Core 2.0 and needs the .NET Core runtime. (The classic ASP.NET is also shipped with the full .NET Framework.)

  • .NET Core, Xamarin and the full .NET Framework are just runtimes.
  • The .NET Standard 2.0 will have almost the same API as the .NET Framework.
  • The .NET Standard 2.0 libraries will have the same set of features as the .NET Framework.
  • ASP.NET Core 2.0 uses .NET Standard 2.0 libraries as a framework.

With these facts, Microsoft's decision to run ASP.NET Core 2.0 on .NET Core 2.0 only doesn't sound that evil anymore.

From my perspective, ASP.NET is not in trouble; it's all fine and it makes absolute sense. The trouble is only in the discussion about it on GitHub and on Twitter :)

What do you think? Do you agree?

Let's see what Microsoft will tell us about it at the Microsoft Build conference in the next days. Follow the live stream: https://channel9.msdn.com/Events/Build/2017

[Update 2017-05-11]

Yesterday, in the official blog post announcing ASP.NET Core 2.0 Preview 1 (Announcing ASP.NET Core 2.0.0-Preview1 and Updates for .NET Web Developers), Jeff Fritz wrote that the preview 1 is limited to .NET Core 2.0 only. The overall goal is to put ASP.NET Core 2.0 on top of .NET Standard 2.0. This means it will be possible to run ASP.NET Core 2.0 apps on .NET Core, Mono and the full .NET Framework.

Material from the .NET Summit 2017

08.05.2017 11:08:00 | Jörg Neumann

The .NET Summit was once again a real highlight this year!
Here is the material from my sessions:


The material from my workshop "Architekturen für XAML-basierte Apps" is available on request.

Adding a custom dependency injection container in ASP.NET Core

07.05.2017 21:00:00 | Jürgen Gutsch

ASP.NET Core is pretty flexible, customizable and extendable. You are able to change almost everything. Even the built-in dependency injection container can be replaced. This blog post will show you how to replace the existing DI container with another one. I'm going to use Autofac as a replacement.

Why should I do this?

There are not many reasons to replace the built-in dependency injection container, because it works pretty well in most cases.

If you prefer a different dependency injection container for some reason, you are able to do it. Maybe you know a faster container, or you like the nice features of Ninject to load dependencies dynamically from an assembly in a specific folder, by file patterns, and so on. I really miss these features in the built-in container. It is possible to use another solution to load dependencies from other libraries, but this is not as dynamic as the Ninject way.

Setup the Startup.cs

In ASP.NET Core the IServiceProvider is the component that resolves and creates the dependencies out of an IServiceCollection. The IServiceCollection needs to be manipulated in the method ConfigureServices within the Startup.cs if you want to add dependencies to the IServiceProvider.

The solution is to read the contents of the IServiceCollection into your own container and to provide your own implementation of an IServiceProvider to the application. Reading the IServiceCollection into a different container isn't that trivial, because you need to translate the different mapping types, which are probably not all available in all containers. E.g. the scoped registration (per-request singleton) is a special one that is only needed in web applications and not implemented in all containers.
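
To illustrate what this translation means, here is a rough sketch that walks the ServiceDescriptors and maps their lifetimes to a hypothetical custom container (RegisterSingleton, RegisterScoped and RegisterTransient are made-up method names):

foreach (var descriptor in services)
{
  // descriptors can also carry an ImplementationFactory or an ImplementationInstance,
  // which would need to be handled as well
  switch (descriptor.Lifetime)
  {
    case ServiceLifetime.Singleton:
      container.RegisterSingleton(descriptor.ServiceType, descriptor.ImplementationType);
      break;
    case ServiceLifetime.Scoped:
      // per-request singleton - not every container supports this out of the box
      container.RegisterScoped(descriptor.ServiceType, descriptor.ImplementationType);
      break;
    case ServiceLifetime.Transient:
      container.RegisterTransient(descriptor.ServiceType, descriptor.ImplementationType);
      break;
  }
}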

Providing a custom IServiceProvider is possible by changing the method ConfigureServices a little bit:

public IServiceProvider ConfigureServices(IServiceCollection services)
{
  // Add framework services.
  services.AddMvc();

  return services.BuildServiceProvider();
}

The method now returns an IServiceProvider, which is created in the last line out of the IServiceCollection. It is needed to add the contents of the service collection to the container you want to use, because ASP.NET actually adds around 40 dependencies before this method is called:

1: Singleton - Microsoft.AspNetCore.Hosting.IHostingEnvironment => Microsoft.AspNetCore.Hosting.Internal.HostingEnvironment
2: Singleton - Microsoft.Extensions.Logging.ILogger`1 => Microsoft.Extensions.Logging.Logger`1
3: Transient - Microsoft.AspNetCore.Hosting.Builder.IApplicationBuilderFactory => Microsoft.AspNetCore.Hosting.Builder.ApplicationBuilderFactory
4: Transient - Microsoft.AspNetCore.Http.IHttpContextFactory => Microsoft.AspNetCore.Http.HttpContextFactory
5: Singleton - Microsoft.Extensions.Options.IOptions`1 => Microsoft.Extensions.Options.OptionsManager`1
6: Singleton - Microsoft.Extensions.Options.IOptionsMonitor`1 => Microsoft.Extensions.Options.OptionsMonitor`1
7: Scoped - Microsoft.Extensions.Options.IOptionsSnapshot`1 => Microsoft.Extensions.Options.OptionsSnapshot`1
8: Transient - Microsoft.AspNetCore.Hosting.IStartupFilter => Microsoft.AspNetCore.Hosting.Internal.AutoRequestServicesStartupFilter
9: Transient - Microsoft.Extensions.DependencyInjection.IServiceProviderFactory`1[[Microsoft.Extensions.DependencyInjection.IServiceCollection, Microsoft.Extensions.DependencyInjection.Abstractions, Version=1.1.0.0, Culture=neutral, PublicKeyToken=adb9793829ddae60]] => Microsoft.Extensions.DependencyInjection.DefaultServiceProviderFactory
10: Singleton - Microsoft.Extensions.ObjectPool.ObjectPoolProvider => Microsoft.Extensions.ObjectPool.DefaultObjectPoolProvider
11: Transient - Microsoft.Extensions.Options.IConfigureOptions`1[[Microsoft.AspNetCore.Server.Kestrel.KestrelServerOptions, Microsoft.AspNetCore.Server.Kestrel, Version=1.1.1.0, Culture=neutral, PublicKeyToken=adb9793829ddae60]] => Microsoft.AspNetCore.Server.Kestrel.Internal.KestrelServerOptionsSetup
12: Singleton - Microsoft.AspNetCore.Hosting.Server.IServer => Microsoft.AspNetCore.Server.Kestrel.KestrelServer
13: Singleton - Microsoft.AspNetCore.Hosting.IStartup => Microsoft.AspNetCore.Hosting.ConventionBasedStartup
14: Singleton - Microsoft.AspNetCore.Http.IHttpContextAccessor => Microsoft.AspNetCore.Http.HttpContextAccessor
15: Singleton - Microsoft.ApplicationInsights.Extensibility.ITelemetryInitializer => Microsoft.ApplicationInsights.AspNetCore.TelemetryInitializers.AzureWebAppRoleEnvironmentTelemetryInitializer
16: Singleton - Microsoft.ApplicationInsights.Extensibility.ITelemetryInitializer => Microsoft.ApplicationInsights.AspNetCore.TelemetryInitializers.DomainNameRoleInstanceTelemetryInitializer
17: Singleton - Microsoft.ApplicationInsights.Extensibility.ITelemetryInitializer => Microsoft.ApplicationInsights.AspNetCore.TelemetryInitializers.ComponentVersionTelemetryInitializer
18: Singleton - Microsoft.ApplicationInsights.Extensibility.ITelemetryInitializer => Microsoft.ApplicationInsights.AspNetCore.TelemetryInitializers.ClientIpHeaderTelemetryInitializer
19: Singleton - Microsoft.ApplicationInsights.Extensibility.ITelemetryInitializer => Microsoft.ApplicationInsights.AspNetCore.TelemetryInitializers.OperationIdTelemetryInitializer
20: Singleton - Microsoft.ApplicationInsights.Extensibility.ITelemetryInitializer => Microsoft.ApplicationInsights.AspNetCore.TelemetryInitializers.OperationNameTelemetryInitializer
21: Singleton - Microsoft.ApplicationInsights.Extensibility.ITelemetryInitializer => Microsoft.ApplicationInsights.AspNetCore.TelemetryInitializers.SyntheticTelemetryInitializer
22: Singleton - Microsoft.ApplicationInsights.Extensibility.ITelemetryInitializer => Microsoft.ApplicationInsights.AspNetCore.TelemetryInitializers.WebSessionTelemetryInitializer
23: Singleton - Microsoft.ApplicationInsights.Extensibility.ITelemetryInitializer => Microsoft.ApplicationInsights.AspNetCore.TelemetryInitializers.WebUserTelemetryInitializer
24: Singleton - Microsoft.ApplicationInsights.Extensibility.ITelemetryInitializer => Microsoft.ApplicationInsights.AspNetCore.TelemetryInitializers.AspNetCoreEnvironmentTelemetryInitializer
25: Singleton - Microsoft.ApplicationInsights.Extensibility.TelemetryConfiguration => Microsoft.ApplicationInsights.Extensibility.TelemetryConfiguration
26: Singleton - Microsoft.ApplicationInsights.TelemetryClient => Microsoft.ApplicationInsights.TelemetryClient
27: Singleton - Microsoft.ApplicationInsights.AspNetCore.ApplicationInsightsInitializer => Microsoft.ApplicationInsights.AspNetCore.ApplicationInsightsInitializer
28: Singleton - Microsoft.ApplicationInsights.AspNetCore.DiagnosticListeners.IApplicationInsightDiagnosticListener => Microsoft.ApplicationInsights.AspNetCore.DiagnosticListeners.HostingDiagnosticListener
29: Singleton - Microsoft.ApplicationInsights.AspNetCore.DiagnosticListeners.IApplicationInsightDiagnosticListener => Microsoft.ApplicationInsights.AspNetCore.DiagnosticListeners.MvcDiagnosticsListener
30: Singleton - Microsoft.AspNetCore.Hosting.IStartupFilter => Microsoft.ApplicationInsights.AspNetCore.ApplicationInsightsStartupFilter
31: Singleton - Microsoft.ApplicationInsights.AspNetCore.JavaScriptSnippet => Microsoft.ApplicationInsights.AspNetCore.JavaScriptSnippet
32: Singleton - Microsoft.ApplicationInsights.AspNetCore.Logging.DebugLoggerControl => Microsoft.ApplicationInsights.AspNetCore.Logging.DebugLoggerControl
33: Singleton - Microsoft.Extensions.Options.IOptions`1[[Microsoft.ApplicationInsights.Extensibility.TelemetryConfiguration, Microsoft.ApplicationInsights, Version=2.2.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35]] => Microsoft.Extensions.DependencyInjection.TelemetryConfigurationOptions
34: Singleton - Microsoft.Extensions.Options.IConfigureOptions`1[[Microsoft.ApplicationInsights.Extensibility.TelemetryConfiguration, Microsoft.ApplicationInsights, Version=2.2.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35]] => Microsoft.Extensions.DependencyInjection.TelemetryConfigurationOptionsSetup
35: Singleton - Microsoft.Extensions.Options.IConfigureOptions`1[[Microsoft.ApplicationInsights.AspNetCore.Extensions.ApplicationInsightsServiceOptions, Microsoft.ApplicationInsights.AspNetCore, Version=2.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35]] => Microsoft.AspNetCore.Hosting.DefaultApplicationInsightsServiceConfigureOptions
36: Singleton - Microsoft.Extensions.Logging.ILoggerFactory => Microsoft.Extensions.Logging.LoggerFactory
37: Singleton - System.Diagnostics.DiagnosticListener => System.Diagnostics.DiagnosticListener
38: Singleton - System.Diagnostics.DiagnosticSource => System.Diagnostics.DiagnosticListener
39: Singleton - Microsoft.AspNetCore.Hosting.IApplicationLifetime => Microsoft.AspNetCore.Hosting.Internal.ApplicationLifetime

140 more services get added by the AddMvc() method. And even more, if you want to use more components and frameworks, like Identity and Entity Framework Core.

Because of that, you should use the common way to add framework services to the IServiceCollection and read the added services into the other container afterwards.

The next lines of dummy code show you what the implementation could look like:

public IServiceProvider ConfigureServices(IServiceCollection services)
{
  // Add framework services.  
  services.AddDbContext<ApplicationDbContext>(options =>
    options.UseSqlite(Configuration.GetConnectionString("DefaultConnection")));

  services.AddIdentity<ApplicationUser, IdentityRole>()
    .AddEntityFrameworkStores<ApplicationDbContext>()
    .AddDefaultTokenProviders();

  services.AddMvc();
  services.AddOtherStuff();

  // create custom container
  var container = new CustomContainer();
  
  // read service collection to the custom container
  container.RegisterFromServiceCollection(services);

  // use and configure the custom container
  container.RegisterSingelton<IProvider, MyProvider>();

  // creating the IServiceProvider out of the custom container
  return container.BuildServiceProvider();
}

The details of the implementation depend on how the container works. E.g., if I'm right, Laurent Bugnion's SimpleIOC already is an IServiceProvider and could be returned directly. Let's see how this works with Autofac:

Replacing with Autofac

Autofac provides an extension library to support this container in ASP.NET Core projects. I added both the container and the extension library packages from NuGet:

Autofac, 4.5.0
Autofac.Extensions.DependencyInjection, 4.1.0

I also added the related usings to the Startup.cs:

using Autofac;
using Autofac.Extensions.DependencyInjection;

Now I'm able to create the Autofac container in the ConfigureServices method:

public IServiceProvider ConfigureServices(IServiceCollection services)
{
  // Add framework services.  
  services.AddDbContext<ApplicationDbContext>(options =>
    options.UseSqlite(Configuration.GetConnectionString("DefaultConnection")));

  services.AddIdentity<ApplicationUser, IdentityRole>()
    .AddEntityFrameworkStores<ApplicationDbContext>()
    .AddDefaultTokenProviders();

  services.AddMvc();
  services.AddOtherStuff();
  
  // create a Autofac container builder
  var builder = new ContainerBuilder();

  // read service collection to Autofac
  builder.Populate(services);

  // use and configure Autofac
  builder.RegisterType<MyProvider>().As<IProvider>();

  // build the Autofac container
  ApplicationContainer = builder.Build();
  
  // creating the IServiceProvider out of the Autofac container
  return new AutofacServiceProvider(ApplicationContainer);
}

// IContainer instance in the Startup class 
public IContainer ApplicationContainer { get; private set; }

With this implementation, Autofac is used as the dependency injection container in this ASP.NET application.
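
Consuming the services doesn't change at all. A controller could, hypothetically, still get the IProvider registered above via plain constructor injection:

public class HomeController : Controller
{
  private readonly IProvider _provider;

  public HomeController(IProvider provider)
  {
    _provider = provider; // now resolved through Autofac
  }
}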

If you also want to resolve the controllers from the container, you should add them to the container too. Otherwise the framework will resolve the controllers and some special DI cases are not possible. A small call adds the controllers to the IServiceCollection:

services.AddMvc().AddControllersAsServices();

That's it.

More about Autofac: http://docs.autofac.org/en/latest/integration/aspnetcore.html

Conclusion

Fortunately Autofac supports .NET Standard 1.6 and there is this nice extension library to get it working in ASP.NET too. Some other containers don't, and it takes some more effort to get them running.

How to add custom logging in ASP.NET Core

04.05.2017 21:00:00 | Jürgen Gutsch

ASP.NET Core is pretty flexible, customizable and extendable. You are able to change almost everything. Even the logging. If you don't like the built-in logging, you are able to plug in your own logger or an existing logger like log4net, NLog, Elmah. In this post I'm going to show you how to add a custom logger.

The logger I show you just writes to the console, and only for one single log level. The feature is to configure a different font color per LogLevel. That's why this logger is called ColoredConsoleLogger.

General

To add a custom logger, you need to add an ILoggerProvider to the ILoggerFactory, that is provided in the method Configure in the Startup.cs:

loggerFactory.AddProvider(new CustomLoggerProvider(new CustomLoggerConfiguration()));

The ILoggerProvider creates one or more ILogger which are used by the framework to log the information.

The Configuration

The idea is to create differently colored console entries per log level and event ID. To configure this we need a configuration type like this:

public class ColoredConsoleLoggerConfiguration
{
  public LogLevel LogLevel { get; set; } = LogLevel.Warning;
  public int EventId { get; set; } = 0;
  public ConsoleColor Color { get; set; } = ConsoleColor.Yellow;
}

This sets the default level to Warning and the color to Yellow. If the EventId is set to 0, we will log all events.

The Logger

The logger gets a name and the configuration passed in via the constructor. The name is the category name, which usually is the logging source, e.g. the type the logger is created in:

public class ColoredConsoleLogger : ILogger
{
  private readonly string _name;
  private readonly ColoredConsoleLoggerConfiguration _config;

  public ColoredConsoleLogger(string name, ColoredConsoleLoggerConfiguration config)
  {
    _name = name;
    _config = config;
  }

  public IDisposable BeginScope<TState>(TState state)
  {
    return null;
  }

  public bool IsEnabled(LogLevel logLevel)
  {
    return logLevel == _config.LogLevel;
  }

  public void Log<TState>(LogLevel logLevel, EventId eventId, TState state, Exception exception, Func<TState, Exception, string> formatter)
  {
    if (!IsEnabled(logLevel))
    {
      return;
    }

    if (_config.EventId == 0 || _config.EventId == eventId.Id)
    {
      var color = Console.ForegroundColor;
      Console.ForegroundColor = _config.Color;
      Console.WriteLine($"{logLevel.ToString()} - {eventId.Id} - {_name} - {formatter(state, exception)}");
      Console.ForegroundColor = color;
    }
  }
}

We are going to create a logger instance per category name with the provider.

The LoggerProvider

The LoggerProvider is the guy who creates the logger instances. Maybe it is not needed to create a logger instance per category, but this makes sense for some Loggers, like NLog or log4net. Doing this you are also able to choose different logging output targets per category if needed:

  public class ColoredConsoleLoggerProvider : ILoggerProvider
  {
    private readonly ColoredConsoleLoggerConfiguration _config;
    private readonly ConcurrentDictionary<string, ColoredConsoleLogger> _loggers = new ConcurrentDictionary<string, ColoredConsoleLogger>();

    public ColoredConsoleLoggerProvider(ColoredConsoleLoggerConfiguration config)
    {
      _config = config;
    }

    public ILogger CreateLogger(string categoryName)
    {
      return _loggers.GetOrAdd(categoryName, name => new ColoredConsoleLogger(name, _config));
    }

    public void Dispose()
    {
      _loggers.Clear();
    }
  }

There's no magic here. The method CreateLogger creates a single instance of the ColoredConsoleLogger per category name and stores it in the dictionary.

Usage

Now we are able to use the logger in the Startup.cs

public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
{
  loggerFactory.AddConsole(Configuration.GetSection("Logging"));
  loggerFactory.AddDebug();
  // here is our CustomLogger
  loggerFactory.AddProvider(new ColoredConsoleLoggerProvider(new ColoredConsoleLoggerConfiguration
  {
    LogLevel = LogLevel.Information,
    Color = ConsoleColor.Blue
  }));
  loggerFactory.AddProvider(new ColoredConsoleLoggerProvider(new ColoredConsoleLoggerConfiguration
  {
    LogLevel = LogLevel.Debug,
    Color = ConsoleColor.Gray
  }));
}

But this doesn't really look nice from my point of view. I want to use something like this:

loggerFactory.AddColoredConsoleLogger(c =>
{
  c.LogLevel = LogLevel.Information;
  c.Color = ConsoleColor.Blue;
});
loggerFactory.AddColoredConsoleLogger(c =>
{
  c.LogLevel = LogLevel.Debug;
  c.Color = ConsoleColor.Gray;
});

This means we need to write at least one extension method for the ILoggerFactory:

public static class ColoredConsoleLoggerExtensions
{
  public static ILoggerFactory AddColoredConsoleLogger(this ILoggerFactory loggerFactory, ColoredConsoleLoggerConfiguration config)
  {
    loggerFactory.AddProvider(new ColoredConsoleLoggerProvider(config));
    return loggerFactory;
  }
  public static ILoggerFactory AddColoredConsoleLogger(this ILoggerFactory loggerFactory)
  {
    var config = new ColoredConsoleLoggerConfiguration();
    return loggerFactory.AddColoredConsoleLogger(config);
  }
  public static ILoggerFactory AddColoredConsoleLogger(this ILoggerFactory loggerFactory, Action<ColoredConsoleLoggerConfiguration> configure)
  {
    var config = new ColoredConsoleLoggerConfiguration();
    configure(config);
    return loggerFactory.AddColoredConsoleLogger(config);
  }
}

With these extension methods we are able to pass in an already defined configuration object, use the default configuration, or use the configure action as shown in the previous example:

loggerFactory.AddColoredConsoleLogger();
loggerFactory.AddColoredConsoleLogger(new ColoredConsoleLoggerConfiguration
{
  LogLevel = LogLevel.Debug,
  Color = ConsoleColor.Gray
});
loggerFactory.AddColoredConsoleLogger(c =>
{
  c.LogLevel = LogLevel.Information;
  c.Color = ConsoleColor.Blue;
});

Conclusion

This is how the output of that nonsense logger looks:

Now it's up to you to create a logger that writes the entries to a database, log file or whatever or just add an existing logger to your ASP.NET Core application.
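
To actually see the colored entries, you just log as usual; e.g. in a hypothetical controller that gets an ILogger<T> injected:

public class HomeController : Controller
{
  private readonly ILogger<HomeController> _logger;

  public HomeController(ILogger<HomeController> logger)
  {
    _logger = logger;
  }

  public IActionResult Index()
  {
    // written in blue by the Information logger configured above
    _logger.LogInformation("Home page requested");
    return View();
  }
}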

Error while starting Docker for Windows

27.04.2017 21:00:00 | Jürgen Gutsch

Over the last couple of months I wanted to play around with Docker for Windows. It worked just twice. Once at the first try, for just one or two weeks. Then I got an error when Docker tried to initialize right after Windows started. After I reinstalled Docker for Windows, it ran the second time for two or three weeks. I tried to reinstall all that stuff, but I didn't get it running again on my machine.

The error shown on this dialog is not really meaningful:

Object reference not set to an instance of an object...

Even the log didn't really help:

Version: 17.03.1-ce-win5 (10743)
Channel: stable
Sha1: b18e2a50ccf296bcd637b330c0ca9faaab9d790c
Started on: 2017/04/28 21:49:37.965
Resources: C:\Program Files\Docker\Docker\Resources
OS: Windows 10 Pro
Edition: Professional
Id: 1607
Build: 14393
BuildLabName: 14393.1066.amd64fre.rs1_release_sec.170327-1835
File: C:\Users\Juergen\AppData\Local\Docker\log.txt
CommandLine: "Docker for Windows.exe"
You can send feedback, including this log file, at https://github.com/docker/for-win/issues
[21:49:38.182][GUI            ][Info   ] Starting...
[21:49:38.669][GUI            ][Error  ] Object reference not set to an instance of an object.
[21:49:45.081][ErrorReportWindow][Info   ] Open logs

Today I found some time to search for a solution and fortunately I'm not the only one who faced this error. I found an issue on GitHub which described exactly this behavior: https://github.com/docker/for-win/issues/464#issuecomment-277700901

The solution is to delete all the files inside of that folder:

C:\Users\<UserName>\AppData\Roaming\Docker\

Now I just needed to restart Docker for Windows, by calling the Docker for Windows.exe in C:\Program Files\Docker\Docker\

Finally I can continue playing around with Docker :)

.NET: Visual Studio Code and .NET Core

25.04.2017 20:34:08 | Steffen Steinbrecher

Visual Studio Code is a free and open-source text editor that is available cross-platform for the Windows, macOS and Linux operating systems. Apart from the name and a few features such as debugging, IntelliSense and version control, Visual Studio Code has nothing in common with Visual Studio. In contrast to Visual Studio, Visual Studio Code does not work […]

Mobile into old age: Microsoft HoloLens supports thyssenkrupp in the production of individually manufactured stairlifts

24.04.2017 15:11:53 | Mathias Gronau

Practical examples of HoloLens use mostly come from the production environment, supporting the assembly and repair of complex machines. At CeBIT, Microsoft, together with thyssenkrupp, is showing an example of HoloLens use in which end consumers can benefit directly from this technology. To deliver tailor-made home mobility solutions, thyssenkrupp relies on Microsoft HoloLens. The mixed reality headset lets customers visualize a stairlift product in their own home in real time. By accelerating various process steps, delivery times up to four times faster are also possible. Microsoft HoloLens thus becomes an essential tool at thyssenkrupp for providing individual mobility solutions within people's own four walls.

The demand for such solutions will rise rapidly: besides the fast-growing urban population, better healthcare and a higher standard of living also contribute to a longer life expectancy. The global population is expected to grow from currently 7.4 billion to over 11 billion by 2100 - and to age. In 2016, one in eight people worldwide was already over 60 years old. For Germany, the Federal Statistical Office calculates that in 2060, 69 percent of the population will be of retirement age. These figures make it clear that the mobility of older people in cities is a growing challenge. Mobility solutions such as stairlifts already contribute to a higher quality of life for older people, who can thus live longer and more independently in their own homes.

However, several aspects of designing and providing home mobility solutions lead to high complexity: every home situation has its own peculiarities, so every stairlift is in principle a one-off, tailored specifically to individual requirements. In addition, most customers only purchase such a product when their mobility is already severely restricted. This makes the purchase an emotional experience in which they have to acknowledge their new limitations. In this situation, customers often want their purchases delivered immediately; in other words, speed is of the essence.

Microsoft HoloLens will significantly help customers design mobility solutions individually. Stairs can now be measured immediately, taking into account ergonomics and obstacles such as radiators, light and electrical fittings, or the proximity to the wall. Customers can conveniently see on a tablet how the stairlift will look on their stairs and make decisions about upholstery, the color of the chair and rails, and tailor-made special features. This way they get a product that fits the appearance of their home.

"New realities require new solutions," comments Andreas Schierenbeck, CEO of thyssenkrupp Elevator. "We see Microsoft HoloLens as an enabler that helps us reinvent the customer experience for home solutions. In doing so, we help provide continuous quality of life for the aging population - regardless of their mobility restrictions." thyssenkrupp has already used Microsoft HoloLens for the home sector with more than 100 customers in Holland, Spain and Germany, with excellent results and positive customer feedback. The next step is the rollout across Germany.

.NET CultureInfo in Windows 10

24.04.2017 03:45:00 |

Did you know that the CultureInfo behavior with “unknown” cultures has changed with Windows 10?

I have stumbled over this “problem” twice now - reason enough to write a short blog post about it.

Demo Code

Let's use this demo code:

    try
    {


        // ok on Win10, but not on pre-Win10 if the culture is not registered
        CultureInfo culture1 = new CultureInfo("foo");
        CultureInfo culture2 = new CultureInfo("xyz");
        CultureInfo culture3 = new CultureInfo("en-xy");

        // not ok even on Win 10 - exception
        CultureInfo culture4 = new CultureInfo("foox");

    }
    catch (Exception exc)
    {

    }

Windows 10 Case

If you run this code under Windows 10, it will only fail for the “foox” culture, because that doesn't even look like a valid culture name.

“culture1”, “culture2” and “culture3” are all valid cultures in the Windows 10 world, but they are resolved as “Unknown Locale” with LCID 4096.

I guess Windows looks for a two- or three-letter ISO-style language code, and “foox” doesn't match this pattern.

Pre Windows 10 - e.g. running on Win Server 2012R2

If you run the code under Windows Server 2012 R2, it will already fail on the first culture, because no “foo” culture is registered there.

“Problem”

The main “problem” is that this behavior could lead to some production issues if you develop with Windows 10 and the software is running on a Win 2012 server.

If you are managing “language” content in your application, be aware of this “limitation” on older Windows versions.

I discovered this problem while debugging our backend admin application. With this ASP.NET frontend it is possible to add and manage “localized” content, and the dropdown for the possible languages listed a whole bunch of very special, but “Unknown Locale”, cultures. So we needed to filter out all LCID 4096 cultures to ensure the application runs under all Windows versions.
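
A minimal sketch of such a filter, assuming the dropdown is built from CultureInfo.GetCultures, could look like this:

using System.Globalization;
using System.Linq;

// Skip the "Unknown Locale" cultures (LCID 4096) that Windows 10 reports
// for custom or made-up culture names.
var knownCultures = CultureInfo.GetCultures(CultureTypes.AllCultures)
    .Where(c => c.LCID != 4096)
    .OrderBy(c => c.DisplayName)
    .ToList();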

MSDN

This behavior is also documented on MSDN

The “Unknown culture” LCID 4096 was introduced with Windows Vista, but only with Windows 10 is it “easily” usable from within the .NET Framework.

12 Open Source UI-Frameworks für WPF-Anwendungen

21.04.2017 23:48:11 | Kazim Bahar

On GitHub, by now the most popular source code hosting platform, there are quite a few freely available UI frameworks for .NET applications. The WPF frameworks listed here are...

Thorsten Herrmann übernimmt bei Microsoft das Großkunden-Geschäft

20.04.2017 12:09:06 | Mathias Gronau

Thorsten Herrmann (49) joined the management board of Microsoft Deutschland on April 1 and will be responsible for the enterprise and partner business in Germany as of July 1. Thorsten Herrmann reports directly to Sabine Bendiek, Chairwoman of the Management Board of Microsoft Deutschland. He takes over from Alexander Stüger, who had held the position on an interim basis since Alastair Bruce left in July 2016. Alexander Stüger's new role within the Microsoft organization will be announced at a later date.

Thorsten Herrmann joins Microsoft from Hewlett Packard Enterprise (HPE), where he spent the last three years as Vice President responsible for the global business relationship between HP (later HPE) and SAP. The business information systems graduate started his career at IBM in 1989. In 1997 he moved to Compaq Computer GmbH into sales for SAP R/3 infrastructure solutions and later became regional sales manager. After the merger with HP, Thorsten Herrmann held various senior sales positions before being appointed to the management board of Hewlett Packard GmbH in April 2009. In that role he took over responsibility, as Vice President, for enterprise sales at HP Deutschland.

"At the Hannover Messe next week we will see the speed at which the digitization of traditional industries is progressing. From what I have observed, Microsoft makes a substantial contribution to effectively supporting industrial companies in their digital transformation. I am very happy to bring my experience and expertise to this," Thorsten Herrmann told employees. "With Thorsten Herrmann we have won a top manager with extensive sales expertise and market knowledge, and I am very much looking forward to working with him personally," Sabine Bendiek commented on the appointment of her new management board colleague.

A first glimpse into CAKE

19.04.2017 21:00:00 | Jürgen Gutsch

For a couple of years now I have been using FAKE (F# Make) to configure my builds. We also use FAKE in some projects at the YooApps; one of them has been using it for more than two years. FAKE is really great and I love using it, but there is one problem with it: most C# developers don't really like to use new things. The worst case for most C# developers, it seems, is a new tool that uses an exotic language like F#.

This is why I have had to maintain the FAKE build scripts ever since I introduced FAKE to the team.

It is the new tool and the F# language that must be scary to them, even though they don't really need to learn F# for the most common scenarios. That's why I asked my fellow developers to try CAKE (C# Make).

  • It is C# make instead of F# make
  • It looks pretty similar
  • It works the same way
  • It is a scripting language
  • It works almost everywhere

They really liked the idea of using CAKE. Why? Just because of C#? It seems so...

It doesn't really make sense to me, but anyway, it absolutely makes sense that the developers should use and maintain their own build configurations.

How does CAKE work?

CAKE is built around a C# scripting language. It uses the Roslyn compiler to compile the scripts. Instead of using batch files, as FAKE does, it uses a PowerShell script (build.ps1) to bootstrap itself and run the build script. The bootstrapping step loads CAKE and some dependencies using NuGet. The last thing the PowerShell script does is call cake.exe and execute the build script.

The bootstrapping needs network access to load all of this. It also downloads nuget.exe if it's not available. If you don't like this, you can commit the loaded dependencies to the source code repository instead.

The documentation is great. Just follow the getting-started guide to get a working example. There's also nice documentation available about setting up a new project.

Configuring the build

If you know FAKE or even MSBuild it will look pretty familiar to you. Let's have a quick look into the first simple example of the getting started guide:

var target = Argument("target", "Default");

Task("Default")
  .Does(() =>
        {
          Information("Hello World!");
        });

RunTarget(target);

The first line retrieves the build target to execute from the command line arguments. Starting from line 3 we see the definition of a build target. This target just prints “Hello World!” as an information message.

The last line starts the initial target by its name.

A more concrete code sample is the build script from the CAKE example (I removed some lines in this listing to get a shorter example):

#tool nuget:?package=NUnit.ConsoleRunner&version=3.4.0

var target = Argument("target", "Default");
var configuration = Argument("configuration", "Release");

// Define the build directory.
var buildDir = Directory("./src/Example/bin") + Directory(configuration);

Task("Clean")
  .Does(() =>
        {
          CleanDirectory(buildDir);
        });

Task("Restore-NuGet-Packages")
  .IsDependentOn("Clean")
  .Does(() =>
        {
          NuGetRestore("./src/Example.sln");
        });

Task("Build")
  .IsDependentOn("Restore-NuGet-Packages")
  .Does(() =>
        {
          MSBuild("./src/Example.sln", settings =>
                  settings.SetConfiguration(configuration));
        });

Task("Run-Unit-Tests")
  .IsDependentOn("Build")
  .Does(() =>
        {
          NUnit3("./src/**/bin/" + configuration + "/*.Tests.dll", new NUnit3Settings {
            NoResults = true
          });
        });

Task("Default")
  .IsDependentOn("Run-Unit-Tests");

RunTarget(target);

This script references another NuGet package to run the NUnit3 tests. A nice feature is that the NuGet dependency is configured right at the beginning of the script.

This build script contains five targets. The method IsDependentOn("...") wires the targets together in the right execution order. This is a bit different from FAKE and maybe a little confusing, because you have to write the targets in the right execution order yourself. If you don't write the script like this, you need to find the initial target and follow its dependencies back to the very first target; in other words, you read the execution order from the last target to the first.

FAKE does this a little easier and wires the targets up in a single statement at the end of the file:

// Dependencies
"Clean"
  ==> "Restore-NuGet-Packages"
  ==> "Build"
  ==> "Run-Unit-Tests"
  ==> "Default"
 
RunTargetOrDefault "Default"

In CAKE, this could possibly look like the following dummy code:

// Dependencies
WireUp("Clean")
  .Calls("Restore-NuGet-Packages")
  .Calls("Build")
  .Calls("Run-Unit-Tests")
  .Calls("Default");

RunTarget(target);

Running the build

To run the build, just call .\build.ps1 in a PowerShell console:
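
For example (assuming the default bootstrapper from the getting-started guide, which forwards a -Target parameter to cake.exe), running the default target or a specific one could look like this:

# run the default target
.\build.ps1

# run a specific target
.\build.ps1 -Target Run-Unit-Tests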

If you know FAKE, the results look pretty familiar:

Conclusion

Anyway, I think CAKE will be accepted much faster by the fellow developers at the YooApps than FAKE was. Some things work a little easier in CAKE than in FAKE and some a little differently, but most things work the same way. So it seems to make sense to switch to CAKE at the YooApps. Let's use it. :)

I'm sure I will write a comparison of FAKE and CAKE later, once I have used it for a few months.

Please use a password manager

17.04.2017 21:00:00 | Jürgen Gutsch

For years now I have been using a password manager to store and manage the credentials for all the accounts I use. Using such a tool is completely normal for me, and my usual flow is to create a new entry in the password manager first, before I create the actual account.

I'm always amazed when I meet people who don't use any tool to store and manage their credentials. They use the same password, or the same small set of passwords, everywhere. This is dangerous, and most of them already know it. Maybe they are too lazy to spend one or two hours thinking about the benefits and searching for the right tool, or they simply don't know that such tools exist.

For me the key benefits are pretty clear:

  • I don't need to remember all the passwords.
  • I don't need to think about secure passwords while creating a new one
  • I just need to remember one single passphrase

Longer passwords are more secure than short ones, even if the short ones include special characters, numbers and upper case characters. But longer passwords are pretty hard to remember. This is why Edward Snowden proposed using passphrases instead of passwords. Passphrases like this are easy to remember. But would it really make sense to create a different passphrase for every credential you use? I don't think so.

I just created a single passphrase, which is used to unlock the password manager, and I let the password manager generate the long passwords for me. Most of the generated passwords look like this:

I4:rs/pQO8Ir/tSk*`1_KSG"~to-xMI/Gf;bNP7Qi3s8RuqIzl0r=-JL727!cgwq
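
Just to illustrate how such a long random password could be generated, here is a minimal C# sketch using a cryptographic random number generator. This is only an illustration, not necessarily what KeePass does internally:

using System;
using System.Security.Cryptography;
using System.Text;

public static class PasswordGenerator
{
    // Hypothetical helper: builds a long random password from a fixed character set.
    // Note: the modulo introduces a tiny statistical bias; good enough for a sketch.
    public static string Generate(int length = 64)
    {
        const string chars =
            "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789" +
            "!#$%&*+,-./:;<=>?@[]^_{|}~";

        var result = new StringBuilder(length);
        using (var rng = RandomNumberGenerator.Create())
        {
            var buffer = new byte[4];
            for (var i = 0; i < length; i++)
            {
                rng.GetBytes(buffer);
                var index = (int)(BitConverter.ToUInt32(buffer, 0) % (uint)chars.Length);
                result.Append(chars[index]);
            }
        }
        return result.ToString();
    }
}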

The tool I use is KeePass 2, which is available on almost all devices and operating systems I use: as a small desktop app on Windows, Mac and Linux, and as an app on Windows Phone and Android. (Probably on iOS too, but I don't own such a device.) KeePass stores the passwords in an encrypted file. This file can be stored on a USB drive or in a shared folder in the cloud. I use the cloud to share the file to every device. The cloud storage itself is secured with a password I don't know by heart, which is stored in that KeePass file.

What the F***, you really put the key to lock your house into a safe which is inside your house? Yes!

The KeePass file is synced to, and accessible offline on, all devices. If the file changes, it is synced to all devices again, so I'm always able to access all my credentials.

With most password managers you can copy the username and the password into login forms using shortcuts. Some of them can even fill in the credentials automatically. KeePass uses the clipboard, but clears the values from it after a couple of seconds, so no one else can access or reuse the credentials from the clipboard.

There are many more password managers out there, but KeePass - with its encrypted file - works best for me. I'm responsible for storing the file in a safe location, and I can do that wherever I want.

Summary

Use a password manager! Spend the time to try the tools and choose the one that fits you best. It is important! Once you use a password manager, you will never want to work without one. It will make your life easier and hopefully more secure.

Blogging with Git Flow

02.04.2017 21:00:00 | Jürgen Gutsch

For a while now we have been using Git Flow in our projects at the YooApps. Git Flow is an add-on for Git that helps us follow the feature branch process in a standard way. We usually use our Jira ticket numbers as names for features, bugs or hotfixes, and we use the Jira version as the release name. This makes absolute sense to us and makes the flow pretty transparent. We also use Bitbucket to host our Git repositories, which is directly linked to Jira. That means on every pull request we have a direct link to the Jira ticket.

Using Git Flow to release new posts

My idea is to also use Git Flow to release new blog posts.

My blog is generated by Pretzel, which is a Jekyll clone written in C#. I host it on an Azure Web Site. I write my posts in GitHub style Markdown files and push them to GitHub. Every time I push a new post, the Azure Web Site starts a new build and runs Pretzel to generate the static blog.

I already wrote some blog posts about using Pretzel

Dear hackers and bots: Because it is a static web site, it doesn't make sense to search for WordPress login pages or WordPress APIs on my blog ;)

For almost one and a half years now I have been using Git on GitHub to publish my posts, but so far I only use the master branch and I don't commit the drafts. I sometimes write two or three posts in parallel and anyway have around ten drafts in Pretzel's _drafts folder. This feels a bit messy, and I would probably lose my drafted posts if the machine crashed.

Using Git Flow, I don't need to use the _drafts folder anymore. Every unfinished feature branch is a draft. When I finish a post, I just need to finish the feature branch. When I want to publish the posts, I start a new release.

Maybe the release step is a bit too much. But currently I need to push explicitly to publish a new post anyway, so it shouldn't be a big deal to also create a new release to merge all finished posts from develop to master. If it doesn't fit, it would be easy to switch to simple feature branches.

Let's see how it works :-)

This blog post feature branch is created by using the following commands in the console:

git flow feature start blogging-with-git-flow

Now I'm in draft mode and can create the blog post file: writing, adding images, linking to existing posts and so on. Once this is done, I do a first commit and publish the feature branch to GitHub:

git add _posts/*
git add img/*
git commit -m "new post about blogging with pretzel"
git flow feature publish

At this stage I can change and commit as much as I want. When the blog post is done, I finish the feature branch and push the current develop branch to GitHub:

git flow feature finish
git push

The last step is publishing the posts. I'm currently not sure, but I could probably use the number of published posts as the minor version number, which will also be used as the tag for the release:

git flow release start 1.48.0
git flow release finish
git push --all
git push --tags

(I should probably create a small script that executes these four lines to release new posts; a possible sketch follows below.)
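
A hypothetical release.ps1 could look like this (assuming the -m flag of git flow release finish to skip the tag message editor):

# release.ps1 - hypothetical helper: publish all finished posts in one release
# usage: .\release.ps1 1.48.0
param([Parameter(Mandatory=$true)][string]$Version)

git flow release start $Version
git flow release finish -m "release $Version" $Version
git push --all
git push --tags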

After that push, Pretzel will start "baking" the blog including the new blog post.

If I want to see the current drafts, I just need to display the existing branches:
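
For example (assuming the default Git Flow branch prefixes), either of these commands lists the current drafts:

git flow feature
git branch --list "feature/*"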

I'm sure this will be a clean way to publish and to handle the drafts and finished posts.

Versioning

While publishing the latest post like this, I realized that GitHub actually displays a release in the repository, which is quite nice. This is not really needed, but it's a fun fact and a reason to think a little more about the version numbers I use when I release a new article.

My idea is to change the major version if I make a huge change to the layout of the blog or if I add a new feature. Because the layout is still the first version and I have only made some really small changes, I'll keep it at version 1. For the minor version I will use the number of published articles.

This doesn't really make sense from the SemVer perspective, but blogging should also be fun, and this way really is fun to me.

This means, with the latest post I published release 1.47.0 and this post will be in release 1.48.0 ;-)

Authoring

In the last post about writing blog posts using Pretzel, I wrote that I use MarkdownPad 2 to write my posts. Unfortunately it seems that this editor is no longer maintained; no new version has been published for a while. I paid for it to get some more extended features. Anyway, a few months ago it started to crash every time I opened it, and there was no way to get an updated or fixed version. (I'm pretty sure it was related to the installation of CefSharp.) This is why I now use Typora, which is pretty minimalistic and lightweight. It works completely differently, but in a pretty cool way: it doesn't use a split screen to edit and preview the content. With Typora you write Markdown directly and it immediately renders the preview in the same editor view. It also supports custom CSS and different Markdown styles. It is real WYSIWYG.

Any thoughts about it? Feel free to drop a comment and let me know :-)

Using xUnit, MSTest or NUnit to test .NET Core libraries

30.03.2017 21:00:00 | Jürgen Gutsch

MSTest was just announced to be open source, and it was already moved to .NET Core some months ago. So it seems to make sense to write another blog post about unit testing .NET Core applications and .NET Standard libraries using the .NET Core tools.

In this post I'm going to use the dotnet CLI and Visual Studio Code exclusively. Feel free to use Visual Studio 2017 instead if you don't like working in the console; Visual Studio 2017 uses the same dotnet CLI and almost the same commands in the background.

Setup the system under test

Our SUT is a pretty complex class that helps us a lot with some basic math operations. This class will be part of a super awesome library:

namespace SuperAwesomeLibrary
{
  public class MathTools
  {
    public decimal Add(decimal a, decimal b) =>  a + b;
    public decimal Substr(decimal a, decimal b) => a - b;
    public decimal Multiply(decimal a, decimal b) => a * b;
    public decimal Divide(decimal a, decimal b) => a / b;
  }
}

I'm going to add this class to the "SuperAwesomeLibrary" project, which is part of a solution I recently created like this:

mkdir unit-tests & cd unit-tests
dotnet new sln -n SuperAwesomeLibrary
mkdir SuperAwesomeLibrary & cd SuperAwesomeLibrary
dotnet new classlib
cd ..
dotnet sln add SuperAwesomeLibrary\SuperAwesomeLibrary.csproj

The cool thing about the dotnet CLI is that you are now really able to create Visual Studio solutions (the second command). This wasn't possible with the previous versions. The result is a Visual Studio and MSBuild compatible solution, and you can use and build it like any other solution in your continuous integration environment. The fourth command creates a new class library, which is added to the solution in the last line.

After this is done, the following commands complete the setup by restoring the NuGet packages and building the solution:

dotnet restore
dotnet build

Adding xUnit tests

The dotnet CLI directly supports adding xUnit test projects:

mkdir SuperAwesomeLibrary.Xunit & cd SuperAwesomeLibrary.Xunit
dotnet new xunit
dotnet add reference ..\SuperAwesomeLibrary\SuperAwesomeLibrary.csproj
cd ..
dotnet sln add SuperAwesomeLibrary.Xunit\SuperAwesomeLibrary.Xunit.csproj

These commands create the new xUnit test project, add a reference to the SuperAwesomeLibrary and add the test project to the solution.

Once this was done, I created the xUnit tests for our MathTools class using VS Code:

public class MathToolsTests
{
  [Fact]
  public void AddTest() 
  {
    var sut = new MathTools();
    var result = sut.Add(1M, 2M);
    Assert.True(3M == result);
  }
  [Fact]
  public void SubstrTest() 
  {
    var sut = new MathTools();
    var result = sut.Substr(2M, 1M);
    Assert.True(1M == result);
  }
  [Fact]
  public void MultiplyTest() 
  {
    var sut = new MathTools();
    var result = sut.Multiply(2M, 1M);
    Assert.True(2M == result);
  }
  [Fact]
  public void DivideTest() 
  {
    var sut = new MathTools();
    var result = sut.Divide(2M, 2M);
    Assert.True(1M == result);
  }
}

This should work, and now you need to call your very best dotnet CLI friends again:

dotnet restore
dotnet build

The cool thing about these commands is that they work in your solution directory: they restore the packages for the whole solution and build all the projects in it. You don't need to go through all of your projects separately.

But if you want to run your tests and you are not in the test project folder, you need to point to the test project directly:

dotnet test SuperAwesomeLibrary.Xunit\SuperAwesomeLibrary.Xunit.csproj

If you are in the test project folder, just call dotnet test without the project file.

This command will run all your unit tests in your project.

Adding MSTest tests

Adding a test library for MSTest works the same way:

mkdir SuperAwesomeLibrary.MsTest & cd SuperAwesomeLibrary.MsTest
dotnet new mstest
dotnet add reference ..\SuperAwesomeLibrary\SuperAwesomeLibrary.csproj
cd ..
dotnet sln add SuperAwesomeLibrary.MsTest\SuperAwesomeLibrary.MsTest.csproj

Even the test class looks almost the same:

[TestClass]
public class MathToolsTests
{
  [TestMethod]
  public void AddTest()
  {
    var sut = new MathTools();
    var result = sut.Add(1M, 2M);
    Assert.IsTrue(3M == result);
  }
  [TestMethod]
  public void SubstrTest()
  {
    var sut = new MathTools();
    var result = sut.Substr(2M, 1M);
    Assert.IsTrue(1M == result);
  }
  [TestMethod]
  public void MultiplyTest()
  {
    var sut = new MathTools();
    var result = sut.Multiply(2M, 1M);
    Assert.IsTrue(2M == result);
  }
  [TestMethod]
  public void DivideTest()
  {
    var sut = new MathTools();
    var result = sut.Divide(2M, 2M);
    Assert.IsTrue(1M == result);
  }
}

And again our favorite commands:

dotnet restore
dotnet build

The command dotnet restore will fail in offline mode, because MSTest is not delivered with the runtime and the default NuGet packages, while xUnit is. This means it needs to fetch the latest packages from NuGet.org. Kinda weird, isn't it?

The last task to do, is to run the unit tests:

dotnet test SuperAwesomeLibrary.MsTest\SuperAwesomeLibrary.MsTest.csproj

This doesn't really look hard.

What about NUnit?

Unfortunately there is no default template for NUnit test projects. I really like NUnit and have used it for years. It is still possible to use NUnit with .NET Core, but you need to do some things manually. The first steps are pretty similar to the other examples, except that we create a console application and add the NUnit dependencies manually:

mkdir SuperAwesomeLibrary.Nunit & cd SuperAwesomeLibrary.Nunit
dotnet new console
dotnet add package Nunit
dotnet add package NUnitLite
dotnet add reference ..\SuperAwesomeLibrary\SuperAwesomeLibrary.csproj
cd ..

The reason why we need to create a console application is that there is no Visual Studio test runner available for NUnit on .NET Core yet. This also means dotnet test will not work. The NUnit devs are working on it, but this seems to need some more time. Anyway, there is the option of using NUnitLite to create a self-executing test library.

We need to use NUnitLite and change the static void Main to a static int Main:

// requires: using System.Reflection; and using NUnitLite;
static int Main(string[] args)
{
  // run all NUnit tests in the current assembly and return the runner's exit code
  var typeInfo = typeof(Program).GetTypeInfo();
  return new AutoRun(typeInfo.Assembly).Execute(args);
}

These lines automatically execute all test classes in the current assembly. They also pass the command line arguments on to NUnitLite, e.g. to set up an output log file.

Let's add a NUnit test class:

[TestFixture]
public class MathToolsTests
{
  [Test]
  public void AddTest() 
  {
    var sut = new MathTools();
    var result = sut.Add(1M, 2M);
    Assert.That(result, Is.EqualTo(3M));
  }
  [Test]
  public void SubstrTest() 
  {
    var sut = new MathTools();
    var result = sut.Substr(2M, 1M);
    Assert.That(result, Is.EqualTo(1M));
  }
  [Test]
  public void MultiplyTest() 
  {
    var sut = new MathTools();
    var result = sut.Multiply(2M, 1M);
    Assert.That(result, Is.EqualTo(2M));
  }
  [Test]
  public void DivideTest() 
  {
    var sut = new MathTools();
    var result = sut.Divide(2M, 2M);
    Assert.That(result, Is.EqualTo(1M));
  }
}

Finally we need to run the tests. But this time we cannot use dotnet test.

dotnet restore
dotnet build
dotnet run -p SuperAwesomeLibrary.Nunit\SuperAwesomeLibrary.Nunit.csproj

Because it is a console application, we need to use dotnet run to execute the app and the NUnitLite test runner.
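
Since everything after -- is passed on to the application, you could also hand options to the NUnitLite runner this way, for example (assuming NUnitLite's --result option, which writes a test result file):

dotnet run -p SuperAwesomeLibrary.Nunit\SuperAwesomeLibrary.Nunit.csproj -- --result=TestResult.xml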

What about mocking?

Creating mocking frameworks for .NET Standard is currently a little bit difficult, because they need a lot of reflection, which is not yet completely implemented in .NET Core or even .NET Standard.

My favorite tool, Moq, is available for .NET Standard 1.3 anyway, which means it should work here. Let's see how it goes.

Let's assume we have a PersonService in the SuperAwesomeLibrary that uses an IPersonRepository to fetch persons from a data store:

using System;
using System.Collections.Generic;

public class PersonService
{
  private readonly IPersonRepository _personRepository;

  public PersonService(IPersonRepository personRepository)
  {
    _personRepository = personRepository;
  }

  public IEnumerable<Person> GetAllPersons()
  {
    return _personRepository.FetchAllPersons();
  }
}

public interface IPersonRepository
{
  IEnumerable<Person> FetchAllPersons();
}

public class Person
{
  public string Firstname { get; set; }
  public string Lastname { get; set; }
  public DateTime DateOfBirth { get; set; }
}

After building the library, I move to the NUnit test project to add Moq and GenFu.

cd SuperAwesomeLibrary.Nunit
dotnet add package moq
dotnet add package genfu
dotnet restore

GenFu is a really great library to create the test data for unit tests or demos. I really like this library and use it a lot.

Now we need to write the actual test using these tools. This test doesn't really make sense, but it shows how Moq works:

using System;
using System.Linq;
using NUnit.Framework;
using SuperAwesomeLibrary;
using GenFu;
using Moq;

namespace SuperAwesomeLibrary.Nunit
{
  [TestFixture]
  public class PersonServiceTest
  {
    [Test]
    public void GetAllPersons() 
    {
      var persons = A.ListOf<Person>(10); // generating test data using GenFu
      
      var repo = new Mock<IPersonRepository>();
      repo.Setup(x => x.FetchAllPersons()).Returns(persons); // mock the method defined in IPersonRepository

      var sut = new PersonService(repo.Object);
      var actual = sut.GetAllPersons();

      Assert.That(actual.Count(), Is.EqualTo(10));
    }
  }
}

As you can see, Moq works the same way in .NET Core as in the full .NET Framework.

Now let's start the NUnit tests again:

dotnet build
dotnet run

Et voilà:

Conclusion

Running unit tests within .NET Core isn't really a big deal, and it is really a good thing that it works with different unit testing frameworks. You have the choice to use your favorite tools. It would be nice to have the same choice with the mocking frameworks as well.

In one of the next posts I'll write about unit testing an ASP.NET Core application, which includes testing MiddleWares, Controllers, Filters and View Components.

You can play around with the code samples on GitHub: https://github.com/juergengutsch/dotnetcore-unit-test-samples/
