
.NET Developer Blogs

C#: BlockingCollection using the MetroFtpClient as an example

24.07.2016 17:27:25 | Steffen Steinbrecher

Inside the MetroFtpClient (https://github.com/steve600/MetroFtpClient) there is a queue that manages the uploads and downloads to be executed. When processing this queue you often want a certain degree of parallelism to increase performance (e.g. several simultaneous downloads). With .NET 4.0 Microsoft took a big step in this direction and gave developers […]

How to continuously deploy an ASP.NET Core 1.0 web app to Microsoft Azure

21.07.2016 21:00:00 | Jürgen Gutsch

We started the first real world project with ASP.NET Core RC2 a month ago, and we learned a lot of new stuff around ASP.NET Core:

  • Continuous Deployment to an Azure Web App
  • Token based authentication with Angular2
  • Setup Angular2 & TypeScript in an ASP.NET Core project
  • Entity Framework Core setup and initial database seeding

In this post, I'm going to show you how we set up continuous deployment for an ASP.NET Core 1.0 project, without tackling TypeScript and Angular2. Please remember: the tooling around .NET Core and ASP.NET Core is still in "preview" and will definitely change until RTM. I'll try to keep this post up-to-date. I won't use the direct deployment to an Azure Web App from a git repository, for some reasons I mentioned in a previous post.

I will write more about the other things we learned in one of the next posts.

Let's start with the build

Building is the easiest part of the entire deployment process. To build an ASP.NET Core 1.0 solution, you can use MSBuild.exe. Just pass the solution file to MSBuild and it will build all projects in the solution.

The *.xproj files use specific targets, which wrap and use the dotnet CLI. You can also use the dotnet CLI directly. Just call dotnet build for each project, or even simpler: call dotnet build in the solution folder and the tools will recursively go through all sub-folders, look for project.json files and build all the projects in the right build order.

Usually I define an output path to build all the projects into a specific folder. This makes the next step a lot easier.
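A rough sketch of such a build call, written in the same Shell.Exec style as the deployment steps below (the helper and the folder variables are assumptions, not the exact call from our build chain):

// Hedged sketch: build everything in the solution folder into one output folder.
// --configuration and --output are regular dotnet build options.
Shell.Exec("dotnet", "build --configuration " + buildConf +
    " --output \"" + buildOutputFolder + "\"", solutionFolder);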

Test the code

Some months ago, I wrote about unit testing DNX libraries (Xunit, NUnit). This didn't really change in .NET Core 1.0. Depending on the test framework, a test library can be a console application which can be called directly. In other cases the test runner is called, which gets the test libraries passed as arguments. We use NUnit to create our unit tests, and NUnit doesn't provide a separate runner for .NET Core yet. All of our test libraries are console apps and build to a .exe file. So we search the build output folder for our test libraries and call them one by one. We also pass the test output file name to those libraries, to get detailed test results.
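A minimal sketch of that loop (again not our exact script; the file pattern, the result file naming and the NUnitLite-style --result switch are assumptions):

// Find the test executables in the build output and run them one by one,
// passing a result file name to get detailed test results (uses System.IO).
foreach (var testLib in Directory.GetFiles(buildOutputFolder, "*.Tests.exe", SearchOption.AllDirectories))
{
    var resultFile = Path.ChangeExtension(testLib, ".TestResult.xml");
    Shell.Exec(testLib, "--result=\"" + resultFile + "\"", ".");
}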

This is pretty much all to run the unit tests.

Throw it to the clouds

Deployment was a little more tricky. But we learned how to do it from the Visual Studio output. If you do a manual publish with Visual Studio, the output window tells you how the deployment needs to be done. These are just two steps:

1. Publish to a specific folder using the "dotnet publish" command

We are calling dotnet publish with these arguments:

Shell.Exec("dotnet", "publish \"" + webPath + "\" --framework net461 --output \"" + 
    publishFolder + "\" --configuration " + buildConf, ".");
  • webPath contains the path to the web project which needs to be deployed
  • publishFolder is the publish target folder
  • buildConf defines the Debug or Release build (we build with Debug in dev environments)

2. Use msdeploy.exe to publish the complete publish folder to a remote machine

The remote machine in our case is an instance of an Azure Web App, but it could also be any other target machine. msdeploy.exe is not a new tool, but it still works, even with ASP.NET Core 1.0.

So we just need to call msdeploy.exe like this:

Shell.Exec(msdeploy, "-source:contentPath=\"" + publishFolder + "\" -dest:contentPath=" + 
    publishWebName + ",ComputerName=" + computerName + ",UserName=" + username + 
    ",Password=" + publishPassword + ",IncludeAcls='False',AuthType='Basic' -verb:sync -" + 
    "enablerule:AppOffline -enableRule:DoNotDeleteRule -retryAttempts:20",".")
  • msdeploy contains the path to msdeploy.exe, which is usually C:\Program Files (x86)\IIS\Microsoft Web Deploy V3\msdeploy.exe.
  • publishFolder is the publish target folder from the previous command.
  • publishWebName is the name of the Azure Web App, which is also the target content path.
  • computerName is the name/URL of the remote machine. In our case "https://" + publishWebName + ".scm.azurewebsites.net/msdeploy.axd"
  • username and password are the deployment credentials. The password is hashed, as in the publish profile that you can download from Azure. Just copy and paste the hashed password.

Conclusion

I didn't mention all the work that needs to be done to prepare the web app. We also use Angular2 with TypeScript, so we also need to fetch all the NPM dependencies, move the needed files to the wwwroot folder, and bundle and minify all the JavaScript files. This is also done in our build & deployment chain. But for this post, it should be enough to describe just the basic steps for a usual ASP.NET Core 1.0 app.

OpenSource: Introducing MetroFtpClient

19.07.2016 12:41:08 | Steffen Steinbrecher

In this post I would like to introduce a small tool for FTP access. The PrismMahAppsSample (https://github.com/steve600/PrismMahAppsSample) and the standard .NET classes FtpWebRequest/FtpWebResponse served as the starting point for the MetroFtpClient (https://github.com/steve600/MetroFtpClient). Once again, a number of open-source projects were used for this project. Here is an overview: Dragablz – https://github.com/ButchersBoy/Dragablz MahApps.Metro – https://github.com/MahApps/MahApps.Metro MaterialDesignInXAMLToolkit – https://github.com/ButchersBoy/MaterialDesignInXamlToolkit Newtonsoft.Json – https://github.com/JamesNK/Newtonsoft.Json OxyPlot – https://github.com/oxyplot/oxyplot […]

Visual Studio Code 1.3 - Tabs, Extensions View and more news

14.07.2016 12:30:29 | Kay Giza

Visual Studio Code (VSCode) has received some decisive improvements and highlights with version 1.3. In this blog post I would like to present some of the new features. ... [... more in this blog post on Giza-Blog.de]


Working with user secrets in ASP.NET Core applications.

10.07.2016 21:00:00 | Jürgen Gutsch

In the past there was a study about critical data in GitHub projects. The researchers wrote a crawler to find passwords, user names and other secret stuff in projects on GitHub. And they found a lot of such data in public projects, even in projects of huge companies which should care pretty much about security.

Most of these credentials are stored in config files. For sure, you need to configure the access to a database somewhere; you also need to configure the credentials for storages, mail servers, FTP, whatever. In many cases these credentials are used for development, with a lot more rights than the production credentials.

Fact is: Secret information shouldn't be pushed to any public source code repository. Even better: not pushed to any source code repository.

But what is the solution? How should we tell our app where to get this secret information?

On Azure, you are able to configure your settings directly in the application settings of your web app. This overrides the settings of your config file. It doesn't matter if it's a web.config or an appsettings.json.

But we can't do the same on the local development machine. There is no configuration like this. How and where do we save secret credentials?

With .NET Core, there is something similar now. There is a SecretManager tool, provided by the .NET Core SDK (Microsoft.Extensions.SecretManager.Tools), which you can access with the dotnet CLI.

This tool stores your secrets locally on your machine. It is not a highly secure password manager like KeePass, but on your development machine it gives you the possibility NOT to store your secrets in a config file inside your project. And this is the important thing here.

To use the SecretManager tool, you need to add it to the "tools" section of your project.json, like this:

"Microsoft.Extensions.SecretManager.Tools": {
  "version": "1.0.0-preview2-final",
  "imports": "portable-net45+win8+dnxcore50"
},

Be sure you have a userSecretsId in your project.json. With this ID the SecretManager tool assigns the user secrets to your app:

"userSecretsId": "aspnet-UserSecretDemo-79c563d8-751d-48e5-a5b1-d0ec19e5d2b0",

If you create a new ASP.NET Core project with Visual Studio, the SecretManager tool is already added.

Now you just need to access your secrets inside your app. In a new Visual Studio project, this should also already be done and look like this:

public Startup(IHostingEnvironment env)
{
    _hostingEnvironment = env;

    var builder = new ConfigurationBuilder()
        .SetBasePath(env.ContentRootPath)
        .AddJsonFile("appsettings.json", optional: true, reloadOnChange: true)
        .AddJsonFile($"appsettings.{env.EnvironmentName}.json", optional: true);

    if (env.IsDevelopment())
    {
        // For more details on using the user secret store see 
        // http://go.microsoft.com/fwlink/?LinkID=532709
        builder.AddUserSecrets();

        // This will push telemetry data through Application 
        // Insights pipeline faster, allowing you to view results 
        // immediately.
        builder.AddApplicationInsightsSettings(developerMode: true);
    }

    builder.AddEnvironmentVariables();
    Configuration = builder.Build();
}

If not, add a NuGet reference to Microsoft.Extensions.Configuration.UserSecrets 1.0.0 to your project.json and add builder.AddUserSecrets(); as shown here.

The extension method AddUserSecrets() loads the secret information of that project into the ConfigurationBuilder. If the keys of the secrets are equal to the keys in the previously defined appsettings.json, the app settings will be overwritten.

Once this is all done, you can use the tool to store new secrets:

dotnet user-secrets set key value

If you put your settings into a separate section in your appsettings.json, you need to build the user secret key out of the section name and the setting name, separated by a colon.

I created settings like this:

"AppSettings": {
    "MySecretKey": "Hallo from AppSettings",
    "MyTopSecretKey": "Hallo from AppSettings"
},

To overwrite the keys with the values from the SecretManager tool, I need to create entries like this:

dotnet user-secrets set AppSettings:MySecretKey "Hello from UserSecretStore"
dotnet user-secrets set AppSettings:MyTopSecretKey "Hello from UserSecretStore"

BTW: to override existing keys with new values, just set the secret again with the same key and the new value.

This way of handling secret data works pretty well for me.
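Just to illustrate how the override shows up at runtime, here is a minimal sketch (not from the original post) of reading one of those keys through the Configuration object built in the Startup shown above:

// Minimal sketch: read the key through the Configuration built in Startup.
// In the Development environment this returns "Hello from UserSecretStore",
// otherwise the value from appsettings.json.
var mySecretKey = Configuration["AppSettings:MySecretKey"];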

The SecretManager tool knows three more commands:

  • dotnet user-secrets clear: removes all secrets from the store
  • dotnet user-secrets list: shows you all existing keys
  • dotnet user-secrets remove <key>: removes the specific key

Just type dotnet user-secrets --help to see more information about the existing commands.

If you need to handle some more secrets in your project, it possibly makes sense to create a small batch file to add the keys, or to share the settings with build and test environments. But never ever push this file to the source code repository ;)

CAKE: Building solutions with C# & Roslyn

09.07.2016 18:15:00 |


CAKE - C# Make

  • A DSL for build tasks (e.g. build following projects, copy stuff, deploy stuff etc.)
  • It’s just C# code that gets compiled via Roslyn
  • Active community, OSS & written in C#
  • You can get CAKE via NuGet
  • Before we begin you might want to check out the actual website of CAKE
  • Cross Platform support

Our goal: building solutions, running tests, packaging NuGet packages, etc.

I have already written a couple of MSBuild and FAKE related blog posts, so if you are interested in these topics as well, go ahead (some are quite old; there is a high chance that some pieces no longer apply):

Ok… now back to CAKE.

Let’s start with the basics: Building

I created a pretty simple WPF app and followed these instructions.

The build.cake script

My script is a simplified version of this build script:

// ARGUMENTS
var target = Argument("target", "Default");

// TASKS
Task("Restore-NuGet-Packages")
    .Does(() =>
{
    NuGetRestore("CakeExampleWithWpf.sln");
});

Task("Build")
    .IsDependentOn("Restore-NuGet-Packages")
    .Does(() =>
{
      MSBuild("CakeExampleWithWpf.sln", settings =>
        settings.SetConfiguration("Release"));

});

// TASK TARGETS
Task("Default").IsDependentOn("Build");

// EXECUTION
RunTarget(target);

If you know FAKE or MSBuild, this is more or less the same structure. You define tasks, which may depend on other tasks. At the end you invoke one task and the dependency chain will do its work.
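To illustrate the dependency chain, here is a hypothetical extra task that is not part of the sample script above; it uses Cake's CleanDirectory helper, and the path is an assumption based on the sample project name:

// Hypothetical extra task: clean the build output before restoring packages.
Task("Clean")
    .Does(() =>
{
    CleanDirectory("./CakeExampleWithWpf/bin/Release");
});

// To hook it into the chain, the existing "Restore-NuGet-Packages" task
// would additionally get .IsDependentOn("Clean").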

Invoke build.cake

The “build.ps1” will invoke “tools/cake.exe” with the input file “build.cake”.

“build.ps1” is just a helper. This PowerShell script will download nuget.exe, download the CAKE NuGet package and extract it under a /tools folder. If you don’t have problems with binary files in your source control, you don’t need this PowerShell script.

Our first CAKE script!

The output is very well formatted and should explain the mechanics behind it well enough:

Time Elapsed 00:00:02.86
Finished executing task: Build

========================================
Default
========================================
Executing task: Default
Finished executing task: Default

Task                          Duration
--------------------------------------------------
Restore-NuGet-Packages        00:00:00.5192250
Build                         00:00:03.1315658
Default                       00:00:00.0113019
--------------------------------------------------
Total:                        00:00:03.6620927

The first steps are pretty easy, and it’s much easier than MSBuild and feels good if you know C#.

The super simple intro code can be found on GitHub.

Re-MVPed

08.07.2016 12:06:00 | Jörg Neumann


My MVP award in the “Windows Platform Development” category has been renewed for another year. Thank you, Microsoft!


How web development changed for me over the last 20 years

07.07.2016 21:00:00 | Jürgen Gutsch

The web changed pretty fast within the last 20 years. More and more logic moves from the server side to the client side, and more complex JavaScript needs to be written on the client side. And some freaky things happened in the last few years: JavaScript moved to the server, and web technology moved to the desktop. That is nothing new, but who was thinking about that 20 years ago?

The web changed, but so did my technology stack. It seems my stack changed back to the roots. 20 years ago, I started with HTML and JavaScript, moving forward to classic ASP using VBScript. In 2001 I started playing around with ASP.NET and VB.NET and used it in production until the end of 2006. In 2007 I started writing ASP.NET using C#. HTML and JavaScript were still involved, but more or less wrapped in third-party controls, and jQuery was an alias for JavaScript at that time. Everything about JavaScript was just jQuery. ASP.NET WebForms felt pretty heavy and not really flexible, but it worked. Later - in 2010 - I also did a lot of stuff with SilverLight, WinForms and WPF.

ASP.NET MVC came up and the web stuff started to feel a little more natural again than ASP.NET WebForms. From an ASP.NET developer's perspective, the web changed for the better: cleaner, more flexible, more lightweight and even more natural.

But there was something new coming up, things from outside the ASP.NET world: strong JavaScript libraries like KnockOut, Backbone and later on Angular and React. The first single page application frameworks (sorry, I didn't want to mention the crappy ASP.NET Ajax thing...) came up, and the UI logic moved from the server to the client. (Well, we did a pretty cool SPA back in 2005, but we didn't think about creating a framework out of it.)

NodeJS changed the world again by using JavaScript on the server. You just need two different languages (HTML and JavaScript) to create cool web applications. I didn't really care about NodeJS, except using it in the background because some tools are based on it. Maybe that was a mistake, who knows... ;)

Now we have ASP.NET Core, which feels a lot more natural than the classic ASP.NET MVC.

Natural in this case means it feels almost the same as writing classic ASP. It means using the stateless web and working with the stateless web, instead of trying to fix it. It means working with the request and response more directly than with classic ASP.NET MVC, and even more so than in ASP.NET WebForms. It doesn't mean writing the same unstructured, crappy shit as with classic ASP. ;)

Since we got the pretty cool client-side JavaScript frameworks and simplified, minimalistic server-side frameworks, the server part was reduced to just serving static files and serving data over RESTish services.

This is the time when it makes sense to have a deeper look into TypeScript. Until now it didn't make sense to me. I have been writing JavaScript for around 20 years, more or less complex scripts, but I never wrote as much JavaScript within a single project as since I started using AngularJS last year. Angular2 was also the reason to have a deep look into TypeScript, because it is now completely written in TypeScript. And it makes absolute sense to use it.

A few weeks ago I started the first real NodeJS project: a desktop application which uses NodeJS to provide a highly flexible scripting runtime for the users. NodeJS provides the functionality and the UI to the users. All written in TypeScript instead of plain JavaScript. Why? Because TypeScript has a lot of unexpected benefits:

  • You are still able to write JavaScript ;)
  • It helps you to write small modules and structured code
  • It helps you to write NodeJS-compatible modules
  • In general you don't need to write all the JavaScript overhead code for every module
  • You will just focus on the features you need to write

This is why TypeScript became a great benefit for me. Sure, a typed language is also useful in many cases, but - having worked with JS for 20 years - I also like the flexibility of implicitly typed JavaScript and I'm pretty familiar with it. That means, from my perspective, the good thing about TypeScript is that I am still able to write implicitly typed code in TypeScript and to use the flexibility of JavaScript. This is why I wrote "You are still able to write JavaScript".

The web technology changed, my technology stack changed and the tooling changed. Everything got more lightweight, even the tools. The console came back and the IDEs changed back to the roots: just being text editors with some benefits like syntax highlighting and IntelliSense. Currently I prefer to use the "Swiss army knife" Visual Studio Code or Adobe Brackets, depending on the type of project. Both start pretty fast and include nice features.

Using these lightweight IDEs is pure fun. Everything is fast, because the machine's resources can be used by the apps I need to develop, instead of by the IDE I need to use to develop the apps. This makes development a lot faster.

Starting the IDE today means starting cmder (my favorite console on Windows), changing to the project folder, starting a console command that watches the TypeScript files and compiles them on save, starting another console to use tools like NPM, gulp, typings, the dotnet CLI, NodeJS and so on, and starting my favorite lightweight editor to write some code. :)

Using coroutines to create tutorials in Unity 3D

07.07.2016 01:59:00 | Daniel Springwald

When reading about Unity's coroutine concept for the first time, I thought to myself: how can this be useful?

In the meantime I have found out that coroutines are one of the most interesting features of Unity 3D.

For my current game project I use them for several purposes; the most useful is to control interactive level tutorials.

The game contains an advisor avatar:

It guides the player through the level and automatically appears when a milestone is reached and the next hint is needed.

Coroutines are the perfect tool to manage this kind of tutorial.

  1. Create a coroutine and run it when the level starts.
  2. If an initial introduction is needed, this should be the first command inside the coroutine.
  3. Create a WHILE loop which waits till the next milestone event happens and contains a “yield return null;”.
  4. What a milestone is depends on the kind of game you are working on. For my current game project these are “opening a dialogue”, “selecting a specific object” or “reaching a special place in the level”.
  5. When the condition of the milestone becomes TRUE, the WHILE loop will exit. In my game the next command after the WHILE loop invokes the advisor popup to explain the next step.
  6. Then the next WHILE loop to wait for the next milestone follows – and so on.

Here is an example of how such tutorial code could look:

    protected IEnumerator HintPlayback(int moneyToAdd, int itemsToBuy)
    {
        yield return new WaitForSeconds(4);

        ShowMessage("Please look at the pending tasks.");

        while (!this.tasks.AreOpen) yield return null;

        ShowMessage(string.Format("Please add some money - at least {0}$.", moneyToAdd));

        while (this.money < moneyToAdd) yield return null; // wait until enough money was added

        ShowMessage(string.Format("Perfect. Now please buy at least {0} items.", itemsToBuy));

        while (this.items.count < itemsToBuy) yield return null; // wait until enough items were bought

        ShowMessage("You have completed the tutorial.");

        yield break;
    }
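To run the coroutine when the level starts (step 1 above), it can simply be started from a MonoBehaviour. A minimal sketch, with a placeholder class name and placeholder parameter values:

    // Requires using UnityEngine; the class name and values are placeholders.
    public class LevelTutorial : MonoBehaviour
    {
        // ... the HintPlayback coroutine from above lives in this class ...

        void Start()
        {
            // Placeholder values - the real milestones depend on the level.
            StartCoroutine(HintPlayback(moneyToAdd: 100, itemsToBuy: 3));
        }
    }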

 

You can also skip one or more hints if milestones are skipped: just check for both conditions (for milestones 1 and 2) in the WHILE loop of milestone 1.

Writing blog posts using Pretzel

05.07.2016 21:00:00 | Jürgen Gutsch

So far I have written more than 30 blog posts with Pretzel and it works pretty well. From my current perspective it was a good decision to make this huge change and move to that pretty cool and lightweight system.

I'm using MarkdownPad 2 to write the posts. Writing is much easier now. The process is simplified and publishing is almost automated. I also added my blog CSS to that editor to have a nice preview.

The process of writing and publishing new posts goes like this:

  1. Create a new draft article and save it in the _drafts folder
  2. Work on that draft
  3. Move the finished article to the _posts folder
  4. Commit and push that post to GitHub
  5. Around 30 seconds later the post is published on Azure

This process allows me to write offline on the train while traveling to the office in Basel. This is the most important thing to me.

The other big change was switching to English. I now get more readers and feedback from around the world. Most readers are now from the US, UK, India and Russia, but there are also readers from the other European countries, Australia and the Middle East (and Cluj in Romania).

Maybe I lost some readers from the German-speaking area (Germany, Switzerland and Austria) who liked to read my posts in German (I need to find a good translation service to integrate), but I got some more from around the world.

Writing feels good in both English and in MarkdownPad :) From my perspective it was a good decision to change the blog system and even the language.

To learn more about Pretzel, have a look at my previous post about using Pretzel.

.NET Core 1.0 RTM and ASP.NET Core 1.0 RTM were announced

27.06.2016 21:00:00 | Jürgen Gutsch

Finally we got .NET Core 1.0 RTM and ASP.NET Core 1.0 RTM. Yesterday Microsoft announced the release of .NET Core 1.0 and ASP.NET Core 1.0.

Scott Hanselman posted a great summary about it: ".NET Core 1.0 is now released!" You'll find more detailed information about .NET Core 1.0 on the .NET Blog in the post "Announcing .NET Core 1.0" and pretty detailed information about ASP.NET Core 1.0 on the .NET Web Development and Tools Blog in the post "Announcing ASP.NET Core 1.0".

Updating existing .NET Core RC applications to the RTM needs some attention. (Not as much as from RC1 to RC2, but there is a little bit to do.) First of all: Visual Studio 2015 Update 3 is needed, as mentioned in pretty much all of the blog posts. To learn more about what needs to be done, Rick Strahl posted a great and pretty detailed post about updating an existing application: "Upgrading to ASP.NET Core RTM from RC2".

Which screen capture software do I use?

27.06.2016 18:31:54 | Kay Giza

From time to time I am asked which tools I use for my presentations at talks or for my blog posts; most recently at the Developer Week. I'll take this as an opportunity to outline it, especially for all techies or people who could use this daily. To make it short, I generally use... [... more in this blog post on Giza-Blog.de]


PDF download: Understanding Microsoft Azure - a guide for developers

22.06.2016 11:41:24 | Kay Giza

The PDF document is free of charge and available without registration. Under the title 'Azure verstehen - ein Leitfaden für Entwickler' ('Understanding Azure - a guide for developers'), Microsoft has published a German-language PDF of around 40 pages. The guide describes the why and how of Microsoft Azure scenarios... [... more in this blog post on Giza-Blog.de]


Visual Studio 2015: Mole Visual Studio Debugger/Visualizer

18.06.2016 12:37:48 | Steffen Steinbrecher

Mole is an alternative visualizer for Visual Studio for detailed inspection of .NET applications. During a debugging session, visualizers allow you to view UI and data objects. For example, you can look at the VisualTree of a WPF application directly in the debugger without having to rely on additional tools. In addition, Mole supports searching and editing properties and […]

Tips: Useful community projects from the .NET world

17.06.2016 17:07:31 | Steffen Steinbrecher

In this post I would like to introduce some community projects. Especially in recent years the open source community has grown rapidly. You can already see this in the various platforms: starting with the classic SourceForge, through Microsoft's CodePlex, up to GitHub. Thousands of projects are hosted on each of these platforms, and a search […]

Recap of the SharePoint User Group at the Martini Club

15.06.2016 10:10:41 | Sebastian Gerling

Yesterday the SharePoint community gathered at the Martini Club in Munich to discuss SharePoint. Samuel Zürcher (https://sharepointszu.com/) gave an interesting talk on SharePoint 2016 Hybrid and also shared his assessment of the maturity of the individual approaches. Afterwards we let the event wind down over finger food. I […]

SharePoint User Group TODAY

14.06.2016 10:54:22 | Sebastian Gerling

The next SharePoint User Group Munich takes place today. The meeting will be held on 14.06.2016 at 18:30 at the Martini Club at Theresienstraße 93. We have the following talk: SharePoint 2016 Hybrid. Abstract: In his talk, Samuel Zürcher gives an overview of the most important new hybrid features in SharePoint 2016, what is behind them and how to […]

FAKE: Build ASP.NET projects with web.config transformation (and without knowing a tiny bit of F#)

12.06.2016 18:00:00 |

This is a follow-up to my other FAKE posts:

What’s the difference between an ASP.NET project and other projects?

The most obvious difference is that the output is a bunch of DLLs and content files. Additionally you might have a web.debug.config or web.release.config in your source folder.

Both files are important, because they are used during a Visual Studio build as a Web.Config Transformation.

With a normal build the transformation will not kick in, so we need a way to trigger the transformation “manually”.

Project Overview

The sample project consists of one ASP.NET project and the .fsx file.


The “released” web.config should cover these 3 main transformation parts:

  • DefaultConnectionString to ‘ReleaseSQLServer’
  • No “debug”-attribute on system.web
  • developmentMode-AppSetting set to ‘true’

Web.Release.config

<?xml version="1.0"?>
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <connectionStrings>
    <add name="DefaultConnection"
      connectionString="ReleaseSQLServer"
      xdt:Transform="SetAttributes" xdt:Locator="Match(name)"/>
  </connectionStrings>

  <appSettings>
    <add key="developmentMode" value="true" xdt:Transform="SetAttributes"
         xdt:Locator="Match(key)"/>
  </appSettings>
  
  <system.web>
    <compilation xdt:Transform="RemoveAttributes(debug)" />
  </system.web>
</configuration>

The FAKE script

We reuse the MSBuild-Helper from FAKE and inject a couple of “Publish”-related stuff, which will trigger the transformation.

A few remarks: in the “normal” WebDeploy world you would have a PublishProfile and it would end up with a .zip file and a couple of other files that fill in parameters like the ConnectionString. With this MSBuild command I mimic a part of this behavior and use the temporary output as our main artifact. In most of my apps I use web.config transformations only for “easy” stuff (e.g. removing the debug attribute) - if you are doing fancy stuff and the output is not what you want, please let me know.

This MSBuild command should apply all your web.config transformations.

Publish a ASP.NET project

...
Target "BuildWebApp" (fun _ ->
trace "Building WebHosted Connect..."
!! "**/*.csproj"
 |> MSBuild artifactsBuildDir "Package"
    ["Configuration", "Release"
     "Platform", "AnyCPU"
     "AutoParameterizationWebConfigConnectionStrings", "False"
     "_PackageTempDir", (@"..\" + artifactsDir + @"Release-Ready-WebApp")
     ]
 |> Log "AppBuild-Output: "
)
...

“AutoParameterizationWebConfigConnectionStrings” or how to get rid of $(ReplacableToken_…

Blogpost updated on 2016-07-18

A friend told me that his transformed web.config contained “$(ReplaceableToken_…)” strings. It seems that “connectionStrings” are treated specially. If you have a connectionString in your web.config and don’t set “AutoParameterizationWebConfigConnectionStrings=False” you will get something like that:

<connectionStrings>
  <!-- Not the result we are looking for :-/ -->
  <add name="DefaultConnection" connectionString="$(ReplacableToken_DefaultConnection-Web.config Connection String_0)" providerName="System.Data.SqlClient" />
</connectionStrings>

I would say this is not the result you are expecting. With the “AutoParameterizationWebConfigConnectionStrings=False” parameter it should either do a transformation or leave the default-connectionString value in the result.

Thanks to Timur Zanagar! I completely missed this issue.

Result


This build will produce two artifacts - the build-folder just contains the normal build output, but without a web.config transformation.

The other folder contains a ready to deploy web application, with the web.release.config applied.

<connectionStrings>
  <add name="DefaultConnection" connectionString="ReleaseSQLServer" providerName="System.Data.SqlClient" />
</connectionStrings>
<appSettings>
  ...
  <add key="developmentMode" value="true" />
</appSettings>
<system.web>
  ...
</system.web>

You can find the complete sample & build script on GitHub.

NDepend: Queries and code rules with CQLinq

12.06.2016 10:58:30 | Steffen Steinbrecher

The first post, "NDepend: Tool zur statischen Code-Analyse" (NDepend: a tool for static code analysis), described some basics of the NDepend tool. NDepend is a tool for static code analysis. It analyzes the source code based on various queries (e.g. lines of code (LOC) or the number of methods within a class) and code rules. With the help of code rules, defined properties of […]

Compiling UDF with ANSYS 16 and Visual Studio 2015

07.06.2016 03:59:00 | Jan-Cornelius Molnar

To compile User Defined Functions (UDF) with ANSYS you need to install a C++ compiler. ANSYS recommends Visual C++, which is freely available in the form of Visual Studio Community. In this article I will...(read more)

Transferring dynamically generated JSON data together with a model to a controller

07.06.2016 00:33:33 | Hendrik Loesch

Since I am currently happily using ASP.MVC, I stumbled upon a problem for which I found only partially helpful support on the web. More precisely, in my view I build up a JSON object with JavaScript, and this object then has to be transferred to the controller via a form. For presentation purposes I would like […]

Grab it! The Microsoft Virtual Academy (MVA) now has an embed player

06.06.2016 18:56:03 | Kay Giza

For a few weeks now it has finally been possible to embed individual courses or whole lessons on third-party websites via an embed player, for example to draw attention to content or even to recommend complete learning paths to interested people - whether on your own website, in forums, on blogs or on training portals. How does it work? ... [... more in this blog post on Giza-Blog.de]


Slides from the XPC

05.06.2016 13:57:00 | Jörg Neumann

The XPC was a real blast! Here are the slides from my session:

The videos of the sessions are available here.

Visual Studio 2015: Productivity Power Tools 2015

04.06.2016 18:08:37 | Steffen Steinbrecher

The Productivity Power Tools are a collection of tools that are not (yet) part of Visual Studio's feature set, but can be installed separately afterwards and often find their way into a later Visual Studio release. Since March 2016 the Productivity Power Tools have been open source and the source code is available via GitHub […]
