
.NET Blogs Archive January 2016

List of Visual Studio Code Extensions and Themes - January 2016 Report

28.01.2016 17:50:06 | Kay Giza

I have created a list of all Visual Studio Code extensions available as of today, 28.01.2016. Of course, you can also browse the Visual Studio Marketplace or search for the most useful extensions directly in Visual Studio Code via F1 and ext install. Since those options for finding extensions already exist, I have skipped the short descriptions of the individual extensions and simply compiled a list - have fun browsing! The next list will follow at the end of February! [... more in this blog article on Giza-Blog.de]


Playing around with GenFu

26.01.2016 17:00:00 | Jürgen Gutsch

In the past I used NBuilder (by Gareth Down) to create test data for my unit tests, demos and UI mock-ups. I really liked NBuilder; I used it for many years, and I wrote about it in my old blog (German) and in the dotnetpro (a German .NET magazine).

Unfortunately NBuilder is not compatible with .NET Core, and there has been no new release since 2011. I'm currently playing around with ASP.NET 5 and .NET Core, so compatibility with .NET Core and the latest dotnet platform standard is needed.

Fortunately, I attended the MVP Summit 2015 and the hackathon on its last day, because that's where I heard about GenFu, written by James Chambers, David Paquette and Simon Timms. They used that hackathon to move the library to .NET Core. I did the same with LightCore at the same event.

GenFu is also a test data generator, with some more features than NBuilder. GenFu includes random data generators to create real-looking data.

"GenFu is a library you can use to generate realistic test data. It is composed of several property fillers that can populate commonly named properties through reflection using an internal database of values or randomly created data. You can override any of the fillers and give GenFu hints on how to fill properties."

PM> Install-Package GenFu

To learn more about GenFu, I needed to play around with it. I did this by writing a small ASP.NET 5 application which shows user groups and their meetings, speakers and topics. I also pushed that application to GitHub. So let me show you what I found while playing around:

Set up the project

I created a new ASP.NET Core 1 web application (without the authentication stuff) and added "GenFu": "1.0.4" to the dependencies in the project.json.

After that I created a set of types like UserGroup, Leader, Meeting, Speaker and so on.

E.g. the UserGroup looks like this:

public class UserGroup
{
    public Guid Id { get; set; }
    public string Name { get; set; }
    public string Description { get; set; }
    public IEnumerable<Leader> Leaders { get; set; }
    public DateTime Founded { get; set; }
    public int Members { get; set; }
    public IEnumerable<Meeting> Meetings { get; set; }
}

Using GenFu

Let's start by creating a list of user groups to show on the start page. Like NBuilder, GenFu uses a fluent API to create a single instance or a list of a specific type:

var userGroup = A.New<UserGroup>();

var usergroups = A.ListOf<UserGroup>(20);

The second line of code generates a list of 20 user groups. The DateTime, Guid and string properties are already filled with randomly created values.

What I want is a list with some more real-looking data. The good thing about GenFu is that it already includes some sample data and a pretty cool fluent API to configure the types:

A.Configure<UserGroup>()
    .Fill(x => x.Members).WithinRange(10, 250)
    .Fill(x => x.Name).AsMusicGenreName()
    .Fill(x => x.Description).AsMusicGenreDescription()
    .Fill(x => x.Founded).AsPastDate();

The configuration needs to be done before retrieving the list or the single object. The result is now much better than before:

We now have a list of music genre user groups :)

To fill the properties Leaders and Meetings, I created the lists before the configuration of the UserGroup and wrote an extension method on IEnumerable<T> that picks an almost random sublist out of a source list (a sketch of that method follows after the next snippet):

var leaders = A.ListOf<Leader>(20);
var meetings = A.ListOf<Meeting>(100);

A.Configure<UserGroup>()
    .Fill(x => x.Members).WithinRange(10, 250)
    .Fill(x => x.Name).AsMusicGenreName()
    .Fill(x => x.Description).AsMusicGenreDescription()
    .Fill(x => x.Founded).AsPastDate()
    .Fill(x => x.Leaders, leaders.GetRandom(1, 4))
    .Fill(x => x.Meetings, meetings.GetRandom(20,100));
            
var usergroups = A.ListOf<UserGroup>(20);
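
The GetRandom extension method used here initially looked something like this - a sketch of mine, since the first version isn't shown in the post; it used its own randomizer (the GenFu-based variant appears later in this post):

private static readonly Random Randomizer = new Random();

// returns an almost random sublist: a random start index and a random length
public static IEnumerable<T> GetRandom<T>(this IEnumerable<T> source, int min, int max)
{
    var length = source.Count();
    var index = Randomizer.Next(0, length - 1);
    var count = Randomizer.Next(min, max);

    return source.Skip(index).Take(count);
}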

Now we can create the leaders, speakers and meetings in the same way to get the full set of data. E.g. to get a list of speakers, we can use the same method we used to generate the user groups:

var speakers = A.ListOf<Speaker>(20);

But wait! Did I really configure the Speakers?

I did not!

I just created the list, but I get good-looking names, twitter handles, email addresses and even a nice phone number. Only the website, the description and the topics list are not well filled. Sure, the names, twitter handles and email addresses don't match each other, but this is not really important.

This is another pretty cool feature of GenFu. Depending on the property name, it finds the right so-called Filler. We are able to configure the speakers to assign the Filler we want, but in many cases GenFu finds the right one without any configuration.
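
To illustrate, the Speaker type could look something like this - my guess at its shape, since the original type isn't shown in the post. The comments mark which properties GenFu fills by naming convention:

public class Speaker
{
    public Guid Id { get; set; }              // random Guid
    public string FirstName { get; set; }     // real-looking first name (by convention)
    public string LastName { get; set; }      // real-looking last name (by convention)
    public string Email { get; set; }         // real-looking email address (by convention)
    public string Phone { get; set; }         // real-looking phone number (by convention)
    public string Twitter { get; set; }       // twitter handle (by convention)
    public string Website { get; set; }       // no matching convention: random string
    public string Description { get; set; }   // no matching convention: random string
    public IEnumerable<Topic> Topics { get; set; }
}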

Just type A.Defaults or GenFu.Defaults to get a list of constants and see what data is already included in GenFu.

Let's extend GenFu and create our own Filler to generate random website addresses. A quick look into the EmailFiller shows how easy it is to create our own PropertyFiller. A string-based PropertyFiller can inherit its base functionality from PropertyFiller<string>:

public class WebAddressFiller : PropertyFiller<string>
{
    public WebAddressFiller()
        : base(
                new[] { "object" },
                new[] { "website", "web", "webaddress" })
    {
    }

    public override object GetValue(object instance)
    {
        var domain = Domains.DomainName();

        return $"https://www.{domain}";
    }
}

The first argument we pass into the base constructor is a list of type names of the objects we want to fill. "object" in this case means any type derived from Object. GenFu contains different Fillers for the property name title, for example, because a person's title is a different thing than an article's title. This way you can create different Fillers for the same property name.

The second argument is the list of property names to fill.

In the GetValue method we generate the value and return it. Because there already is an EmailFiller which generates domain names too, I reuse the ValueGenerator DomainName to get a random domain name out of GenFu's resources.

Now we need to register the new Filler with GenFu and use it:

A.Default().FillerManager.RegisterFiller(new WebAddressFiller());
var speakers = A.ListOf<Speaker>(20);

The result is as expected: we get well-formed web addresses:

That was pretty easy with only a few lines of code :)

In one of the first snippets at the beginning of this post, I created an extension method to create a random-length list out of a source list. Wouldn't it be better if we could create a ListFiller to do that automatically? There is already a configuration extension for list properties called WithRandom, but that one wants a list of lists to randomly select one list out of it. I would like it a little different: an extension method where I pass the source list and a min and max count of list entries:

public static GenFuConfigurator<TType> AsRandom<TType, TData>(
    this GenFuComplexPropertyConfigurator<TType, IEnumerable<TData>> configurator,
    IEnumerable<TData> data, int min, int max)
    where TType : new()
{
    configurator.Maggie.RegisterFiller(
        new CustomFiller<IEnumerable<TData>>(
            configurator.PropertyInfo.Name, typeof(TType),
            () => data.GetRandom(min, max)));

    return configurator;
}

This isn't really a Filler. It is an extension method on the GenFuComplexPropertyConfigurator which registers a CustomFiller to get random data out of the source list. As you can see, I reused the initially created extension method to generate the random sublists, but I needed to modify it to use GenFu's randomizer instead of a separate one:

private static IEnumerable<T> GetRandom<T>(this IEnumerable<T> source, int min, int max)
{
    var length = source.Count();
    var index = A.Random.Next(0, length - 1);
    var count = A.Random.Next(min, max);

    return source.Skip(index).Take(count);
}

I also made this method private because of its dependency on GenFu.

Now I can use this method in the GenFu configuration of the UserGroup to randomly fill the leaders and the meetings of a user group:

var leaders = A.ListOf<Leader>(20);
var meetings = A.ListOf<Meeting>(100);

A.Configure<UserGroup>()
    .Fill(x => x.Members).WithinRange(10, 250)
    .Fill(x => x.Name).AsMusicGenreName()
    .Fill(x => x.Description).AsMusicGenreDescription()
    .Fill(x => x.Founded).AsPastDate()
    .Fill(x => x.Leaders).AsRandom(leaders, 1, 4)
    .Fill(x => x.Meetings).AsRandom(meetings, 5, 100);

This really isn't much code to automatically generate test data for your tests or dummy data for your mock-ups. It is just a bit of configuration, which can be placed in a central place.

I think ...

... GenFu is becoming my favorite library for creating test and dummy data. I like the way it generates good-looking random dummy data, and it is really easy to use and to extend.

BTW: You'll find the small playground application on GitHub: https://github.com/JuergenGutsch/GenFuUserGroups/

Microsoft at CeBIT 2016

22.01.2016 12:43:19 | Mathias Gronau

Like every year, CeBIT will take place again this March, and like every year the fair has a motto. This time it is the same motto as last year - d!conomy - extended by the three keywords join, create, succeed. Microsoft has adapted to this and, as in the previous year, presents its appearance under the motto "Digitales Wirtschaftswunder" (digital economic miracle). So everything as usual? No. Microsoft's main booth, traditionally located in hall 4, has been completely redesigned, both in content and in layout.

In terms of pure floor space, the booth is smaller this year. In return, instead of the 60 to 65 partners with their own booths, as in the past, only 10 to 15 partners will be represented on the main booth. Which partners these will be has not been decided yet, as the selection process is in full swing. In addition, two Microsoft customers who are not partners will be represented on the booth. Bayer 04 Leverkusen, for example, will present how IT makes the football club successful. Despite the smaller booth, the main stand is planned as one large communication area where partners and customers use showcases to demonstrate how companies can make themselves digitally fit for the future today. Each of the customers and partners contributes one showcase. These practical examples are meant to demonstrate the start into the future.

But the customers of those partners who don't get a spot on the main booth are not left out in the rain either. For the first time, Microsoft partner booths can also be found in other areas of CeBIT this year. At the VOI joint booth in hall 3, everything revolves around SharePoint. The partners at the joint booth in hall 13 offer a mix of Internet of Things, Unified Communications & Collaboration (UCC) and Skype for Business.

Creating a playlist for PlayPoster by OOSM

20.01.2016 08:01:00 | Martin Hey

After describing in my last post how to get PlayPoster up and running, today it's about mapping my actual use cases. To do this, I create a playlist in the admin portal and fill it with content.

The admin portal is very clearly laid out, and all important menu items are available via direct links. A few words about the structure: a PlayPoster stick (= unit) belongs to a group (= location). Playlists can be assigned to a location, and thus to all devices it contains, and can be active all day, in the morning, at noon or in the evening. A playlist itself is an ordered sequence of building blocks which are created and edited in the editor. Some blocks require images (or at least benefit from them); these are stored under Files.
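
To picture this structure, here is a small model sketch of the hierarchy as I understand it (my own illustration, not an official OOSM API - everything is actually configured via the web portal):

using System;
using System.Collections.Generic;

public class Unit { }                     // a PlayPoster stick

public class Location                     // a group of sticks
{
    public List<Unit> Units { get; set; }
    public List<Playlist> Playlists { get; set; }  // active all day, mornings, at noon or evenings
}

public class Playlist                     // an ordered sequence of building blocks
{
    public List<Block> Blocks { get; set; }
}

public class Block
{
    public string Template { get; set; }  // e.g. image, "External Page", "Twitter"
    public TimeSpan Duration { get; set; } // display duration (mm:ss)
}
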
Creating a logo block
A rather simple block is the image block, which simply displays an image for a certain amount of time. This block is already part of the default playlist the device ships with. So how do you create such a block yourself? Quite simply: drop an image onto the menu bar via drag & drop and define the display duration in the dialog that then appears (the duration is always in the format mm:ss).

Embedding an external web page
In my previous post I mentioned as a use case that I want to display the ticket status (in our case from Redmine) and the status of the build and test system (in our case Jenkins). Of course there are no ready-made blocks for that. But what am I a developer for? I can run a web server on one of our machines that aggregates and renders the data the way I want, and then have that web page displayed. To do this, I create a new block under Editor, assign the "External Page" template and enter the URL to be displayed, plus the display duration again. The URL must be specified from the stick's point of view, so I can also use local URLs. For technical reasons the preview in the browser does not work if, while editing, I am not in the same network as the stick and the web page is unreachable for me, or if my URL is an http URL (since the portal runs on https and the browser blocks insecure elements). On the device, however, it works.
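
Such an aggregation page can be as small as a self-hosted HTTP listener. Here is a minimal sketch (my own illustration - the port and the hard-coded status line are placeholders; in reality you would query the Redmine and Jenkins REST APIs here):

using System.Net;
using System.Text;

class StatusPageServer
{
    static void Main()
    {
        var listener = new HttpListener();
        listener.Prefixes.Add("http://+:8080/");  // must be reachable from the stick's network
        listener.Start();
        while (true)
        {
            var context = listener.GetContext();
            // placeholder: aggregate the ticket and build status here,
            // e.g. via the Redmine and Jenkins REST APIs
            var html = "<html><body><h1>Builds: OK | Open tickets: 42</h1></body></html>";
            var buffer = Encoding.UTF8.GetBytes(html);
            context.Response.ContentType = "text/html";
            context.Response.OutputStream.Write(buffer, 0, buffer.Length);
            context.Response.Close();
        }
    }
}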

Displaying tweets
Displaying tweets is similarly easy. Use the "Twitter" block and enter the hashtag or the person you want to follow. Within the defined display duration, the tweets filtered this way are cycled every few seconds. Since both the font color and the default background are white, it is important to define a background image for this block in the editor. To do this, simply upload an image in the background color of your choice via Files beforehand and set it as the background in the block.

Static texts
There are various templates for displaying static texts. I usually use one of the "Pricelist" templates, since I often need a tabular view (e.g. for announcing community events).

Creating the playlist
Now that I have created all the blocks, I only need to assemble them into a playlist. To do this, I switch to Playlists, drag the blocks into my timeline in the desired order via drag & drop and save the playlist.
Once I also choose when and where the playlist should be played, it is pushed to the devices selected via the location and displayed there immediately.

A complete tutorial is available on the OOSM support site.

ASP.NET 5 is now ASP.NET Core 1

19.01.2016 17:00:00 | Jürgen Gutsch

Naming seems to be the hardest thing for software developers, even for Microsoft. ;) There were huge discussions about the naming of ASP.NET vNext at the MVP Summit 2015 and around the world. Now they have found the right names for completely new things: yesterday Microsoft reintroduced ASP.NET Core 1.0 and .NET Core 1.0.

Now they have a lot to do to change all the NuGet packages, library names and version numbers. But from my perspective this is a good change, because all of this really is new: the .NET Core libraries, .NET Core 1.0 and ASP.NET Core 1.0 are completely rewritten and redesigned. The new names and the version 1.0 make absolute sense now.

BTW: the same happened to Entity Framework 7, which is now called Entity Framework Core 1.0, or EF Core 1.0.

(Image source: http://www.hanselman.com/blog/ASPNET5IsDeadIntroducingASPNETCore10AndNETCore10.aspx)

To get more information about the changes, read Scott Hanselman's blog post about it and watch the latest ASP.NET Community Standup.

There is another benefit to the name changes, I think: now it is possible to update the "classic" ASP.NET 4.6 and the .NET Framework to a new version 5.0 in the future without confusing all the .NET developers ;)

The only bad thing about this is that there are too many changes while ASP.NET Core is in RC with a go-live license. This is a little bit critical. These changes should have been made in the beta phase.

What do you think about the name and version changes?

Digital signage with PlayPoster by OOSM

19.01.2016 07:01:00 | Martin Hey

Who hasn't seen them - the advertising monitors in shops, the info screens at events. Digital signs, or digital signage, have become indispensable in marketing.

We wanted such a screen in our office too. At Christmas the time had come: a brand-new TV stood in the office and was soon mounted on the wall quite professionally. But what next? OK, for a few static contents this works quite well - plug in a USB stick and a slideshow of images can be displayed. But that's about it.

Our screen should not only serve marketing purposes; I also want to display useful information for the employees, and that information is only really useful if it is generated dynamically. We are a company that builds software, so what counts as useful information? I'm thinking of:
  • the current project status (i.e. the sprint or project burndown)
  • the status of the build system (which builds are currently running; are there build errors or failed tests anywhere)
  • information about the company or the department
  • information about community events
  • information from social media (e.g. a Twitter feed)
None of this can be done with the static slideshow the TV comes with. Something else was needed.

In early summer last year I was at a startup event in Amsterdam and met the heads of a company that builds exactly such a digital signage solution: OOSM (pronounced: awesome) with their product PlayPoster. In conversations with CEO Peter Bruner it turned out that the OOSM stick is probably exactly the simple solution we need.

So I ordered a stick. A few days later it was in my mailbox. The installation is as simple as can be and done in three steps:
  • unpack the stick and connect it to the power supply and the screen's HDMI port
  • enter the WiFi credentials (you briefly need a USB mouse for this)
  • briefly disconnect the stick from the power so it reboots
That's it. From now on, the preinstalled app opens automatically on every boot and displays the defined content.

Which brings us to the next point: defining the content. After all, I wanted to display both static and dynamic content.

With the order you receive credentials for the OOSM admin portal. There you define so-called playlists which can be assigned to a group of devices - a device belongs to one group; a group has one or more active playlists (all day or for a specific time slot); a playlist consists of building blocks in a defined order. I will explain how this works in detail in a later post.

There are many digital signage solutions on the market, so besides the personal contact to the developer team, here are a few hard facts behind the decision for PlayPoster.

Easy installation with existing hardware (+)
It doesn't always have to be a brand-new TV - we all have some old monitors sitting around that we no longer need and that are perfectly suited for digital signage. Simply connect the stick, set up WiFi, done. And at 85 x 30 x 10 millimeters, the stick really isn't big.

Cloud-based configuration (+)
I can control from home or on the road what is shown on my screens in the office, and when.

Low costs (+)
The purchase costs are comparatively low - other digital signage products cost over 100 euros per device, while the current price (as of January 2016) of the PlayPoster stick is 59 euros one-time, with no further usage-based payments.

Beta phase (+/-)
The system is still in beta, so you occasionally stumble over spots in the admin portal where the user experience is not yet optimal. But the developer team is happy about feedback and fixes bugs (if you find any) super fast.

Many predefined templates (+)
There are predefined templates for the building blocks that already cover many use cases:
  • static image (essentially a slideshow)
  • external web page (e.g. for our system status displays)
  • tweets (by hashtag or person)
  • RSS feed
  • gimmicks like a clock, horoscope or weather
  • variously laid-out image/text screens (price list, image with text)

FullHD only (-)
In its current configuration the stick only supports FullHD, not yet 4K. For us that is enough for now. According to OOSM, however, the next generation will definitely be 4K-capable.

Enough about the installation. In my next post I will show how I put together my PlayPoster playlist.

PaperMe ported from Silverlight to HTML5 and Bootstrap

17.01.2016 22:44:00 | Daniel Springwald

PaperMe is a tool for designing papercrafts from photos. I created it in 2010 using Silverlight, which back then seemed to be becoming a good alternative to Flash.

Unfortunately, Flash *and* Silverlight "died" because they are not supported on mobile devices.
So it was time for a total makeover.



It took a lot of work to port PaperMe to HTML5 and a responsive design, but now it's finished.



If you are interested in creating a personal papercraft of yourself, your partner or your favorite celebrities, just give PaperMe a try :-)


To 'var' or not to 'var'

14.01.2016 17:00:00 | Jürgen Gutsch

There are many discussions out there about whether to write 'var' or the concrete type when declaring a variable. Personally, since it became possible to use 'var' in C#, I have always written 'var' whenever possible, and every time I refactor a legacy application I am reminded of the reasons why it is important to write 'var' instead of the concrete type.

In the last months I worked a lot with legacy applications and spent a lot of effort refactoring some of the code, because the concrete type was used instead of 'var'.

Many people don't like 'var'. Some of them because of the Variant type in the VB languages. These folks are using C#, but they still don't know C# well, because 'var' is not a variant type but a kind of placeholder for the concrete type.

var age = 37;

This doesn't declare a variant type. It tells the compiler to place Int32 where we wrote 'var'. 'var' is simply syntactic sugar, but with many additional benefits.

The other people who don't like 'var' want to directly see the type of the variable. (From my perspective, they don't know Visual Studio very well.) Their opinion is that 'age' could also be a string or a double. Or maybe a Boolean. Just kidding. But it seems they don't trust variable names and assignments.

My thoughts about 'var'

If I read the code, I directly see that 'age' is numeric. While reading the code, in most cases it is not really important to know what kind of number it is. But in this case an integer makes more sense, because it is about the age of something. Writing meaningful variable names is very important, with or without 'var'. But using a meaningful name and the concrete type when declaring a variable gives us a threefold redundancy in just three words:

int age = 37;
// numeric number = number
// we know that 'age' is always a number ;)

Cleaner, more readable and with less redundancy is something like this:

var age = 37;
// variable age is 37

I don't read 'var' as a type. I read 'var' as a shortcut for just 'variable'. The name and the assignment tell me about the type.

And what about this?

var productName = _productService.GetNameFromID(123);

Because I trust the names, I know the variable is of type string. (Any kind of string, because it could be a custom string implementation, but this doesn't matter in this line of code.)

While refactoring legacy code I also found something like this:

string product = _productService.GetNameFromID(123);

When reading the variable name 'product' later on, I'm not really sure about its type and would rather expect a Product type instead of a string. This is not a reason to use the concrete type; it is a reason to change the variable name instead:

var productName = _productService.GetNameFromID(123);

Because names are strings in most cases, I would then also expect a string.

Let's have a look at this:

var product = _productService.GetFromID(123);

We are able to read a lot out of this simple line:

  • It is a product
  • The type could be Product, because we are working with products
  • It has an ID which is numeric

Hopefully it is true ;) To be sure, I can use the best tool for writing C# code: in Visual Studio, just place your mouse over the keyword 'var' to get the type information. VS knows this information from the return type of the method GetFromID(). That's simple, isn't it?

Seeing the type information is not a good reason to write the concrete type.

Another argument is readability. Let's look at a nested generic type:

IDictionary<String, IEnumerable<Product>> productGroups = _productService.GetGroupedProducts();

Is this really a good and readable solution? And what happens if you change the type of the groups from IEnumerable to something else?

Doesn't this look much cleaner and more readable?

var productGroups = _productService.GetGroupedProducts();

I know it is not always possible to write 'var': if you don't assign a value, you have to write a concrete type. In method arguments you always have to write the concrete type, and a return value always needs a concrete type definition - even dynamic is a concrete type definition in this case ;)
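
A few examples of these cases:

public class Examples
{
    // no initializer - the compiler has nothing to infer a type from:
    // var count;          // does not compile
    // var value = null;   // does not compile, 'null' has no type

    // method arguments and return types always need a concrete type:
    // public var GetNameFromID(var id) { ... }   // does not compile
    public string GetNameFromID(int id)
    {
        return "Product " + id;
    }
}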

The most important reason to write 'var' is refactoring. To reduce code changes while refactoring, use this useful keyword, because the declaration doesn't need to be changed.

Product product = _productService.GetFromID(123);

If we need to change the type of the method's return value for any reason, we also need to change the type in the variable declaration. Let's simplify this just a little bit:

var product = _productService.GetFromID(123);

Now we don't need to change anything in this line of code.

At a customer site, I had to mask a domain object and its dependencies with interfaces to make later refactorings simpler. Extracting the interfaces wasn't a big deal. But most of the code changes were replacing the concrete type in the variable declarations. Sure, ReSharper helps a lot, but this domain object was used in many different and huge solutions, so this couldn't be done in one step. If 'var' had been used in all possible cases, the needed code changes would have been reduced a lot.

Conclusion

The keyword 'var' helps you maintain your code more easily; it reduces code changes and redundancies, and it makes your code more readable. Use it whenever possible. It is not a variant type; it is a placeholder for the concrete type. It doesn't hide type information, because the assignment and the variable name contain the needed information, and Visual Studio helps you find out more about the variable if needed.

XML parsing problem while trying to query SharePoint Online

14.01.2016 17:00:00 | Jürgen Gutsch

Yesterday it worked, and today it doesn't. You probably know the situation. Usually some code has changed when something like this happens. But this time there were no code changes and no newly referenced libraries. It just didn't work, and I didn't know what was happening.

I got an XmlException telling me that my application couldn't parse an XML result because of the DTD:

System.Xml.XmlException: For security reasons DTD is prohibited in this XML document. To enable DTD processing set the DtdProcessing property on XmlReaderSettings to Parse and pass the settings into XmlReader.Create method.

This exception was thrown deep inside the SharePoint client library when my application wanted to query some user information from SharePoint. The SharePoint context itself was fine. I lost 3 hours finding out what had changed since the day before. I had a second look at some pull requests, I checked the Git history and I checked my .NET environment. I also asked the available team members. But it happened only to me. This was pretty confusing and annoying.

There were no relevant code changes between the day it worked and the day the problem appeared. But there was another huge difference: that day I worked at home; the day before I was in the office in Basel. A team member asked me about that and sent me a link to a StackOverflow thread where other developers had almost the same problem: "DTD is prohibited" error when accessing sharepoint 2013/office365 list. All of them wanted to query information from Office 365 and SharePoint Online. One of them got this exception using a WiFi extender. Another one got it because his ISP (Internet Service Provider) serves a custom error page whenever it can't resolve a specific domain.

I started Fiddler to see what was happening. I don't use a WiFi extender, but at home I use a different ISP than in the office in Basel.

Sniffing the HTTP traffic showed me what happened: I had exactly the same problem as the second person on StackOverflow. In my case it was msoid.[companyname].emea.microsoftonline.com which couldn't be resolved by my ISP:

My ISP provides a feature called "navigation help": a custom error page that includes a web search for the unresolved host header. That means if the ISP can't resolve a domain name (host header), it serves a page with some help to solve the problem, which is a nice feature in general. The real issue is that they send this page with HTTP status 200, which signals that everything is fine with the response. The SharePoint client library therefore tries to parse the returned HTML while expecting XML. Exactly this throws the XML parsing exception.
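
If you don't have Fiddler at hand, a few lines of code are enough to check what such a host actually returns. This is just a quick diagnostic sketch of mine, not part of the original troubleshooting; the host name is a made-up example following the pattern above:

using System;
using System.Net.Http;

class CheckHost
{
    static void Main()
    {
        // hypothetical tenant name - use your own msoid.<companyname>.emea.microsoftonline.com
        var url = "http://msoid.example.emea.microsoftonline.com/";
        using (var client = new HttpClient())
        {
            try
            {
                var response = client.GetAsync(url).GetAwaiter().GetResult();
                // HTTP 200 with an HTML content type instead of a DNS error
                // is exactly the "navigation help" case described above
                Console.WriteLine("Status: {0}", response.StatusCode);
                Console.WriteLine("Content-Type: {0}", response.Content.Headers.ContentType);
            }
            catch (HttpRequestException ex)
            {
                // a clean DNS failure ends up here instead
                Console.WriteLine("Request failed: {0}", ex.Message);
            }
        }
    }
}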

The guy on StackOverflow solved this by switching the feature off. Fortunately, I also found a hint on my ISP's page on how to switch this "navigation help" feature off. And this solved the problem.

Switching it off and restarting the router solved an issue that cost me more than 5 hours.

My application is a web application based on ASP.NET and will be hosted on Azure, so this issue will not happen in production. But if you are developing a client application which needs to connect to SharePoint Online, this can definitely happen if your users change their workspace (working at home, in a restaurant or somewhere else) to one with a different ISP that provides something like this.

Getting exactly this exception while querying information from SharePoint Online means the returned result is not the expected XML, which can only happen if the XML parser doesn't receive valid XML. The unresolvable host header is not the real problem, because the client library seems to use a fallback in this case. The problem is that the ISP returns a wrong HTTP status when the host header can't be resolved.

OutOfMemoryError when compiling Xamarin projects

13.01.2016 19:01:00 | Martin Hey

Today I tried to add the Google Ads framework to a Xamarin Android project. Nothing easier than that: open NuGet and select Xamarin.GooglePlayServices.Ads.

Disillusionment set in quickly: from that moment on, my project would no longer compile. The build took much longer than usual and then aborted with the error:
java.lang.OutOfMemoryError. Consider increasing the value of $(JavaMaximumHeapSize). Java ran out of memory while executing 'java.exe'

Now, I'm working in Visual Studio, and I couldn't really find a setting there that would have helped me. After some internet research, here is the solution: the following code has to be added to the csproj file:
<PropertyGroup>
    <JavaMaximumHeapSize>1G</JavaMaximumHeapSize>
</PropertyGroup>
and the build runs through again.

INETA Germany

11.01.2016 17:00:00 | Jürgen Gutsch

After 8 years as the heads of INETA Germany, Lars Keller and Karim El-Jed will leave INETA Germany. The reason is that Lars has been working for Microsoft Germany since November 1st, and he wants to ensure that INETA stays independent from Microsoft. Co-lead Karim will also leave, to focus more on supporting his own .NET user group.

This means INETA needs two new heads, and Lars found them.

Ulrike Stirnweiss will be the new co-lead, and I will be the new lead of INETA Germany.

I'm pretty proud to work with Ulli to support the German .NET user groups with the Speaker Bureau and with a budget to pay the travel costs of registered speakers.

Maybe you have read that INETA North America will shut down by the end of the year. At the beginning of November I talked to their lead. It seems the main reason they are shutting down is that there no longer seems to be a need to support the user groups in this way: the North American user groups and the available speakers are very well connected and manage all of that on their own.

We will keep INETA Germany alive and hopefully do a little more to support the German user groups and the German .NET community in general. INETA Europe will also stay alive with the European Speaker Bureau.

Currently we have some ideas (and we got a few pretty cool ones from Lars) on how to improve the support for the German user groups and to make INETA Germany a little more present in the German .NET community.

In the near term we will keep things as they currently are. This also means Ulli will do Karim's job and I will do the same stuff Lars did until the end of 2015. Ulli will be responsible for everything around the website, marketing and so on. I will continue with Lars' tasks, being responsible for the Speaker Bureau, the user groups and the sponsoring.

If you have any feedback about anything to improve or anything you miss, please drop us a short note. Write to hallo@ineta-germany.de, or contact us on any channel like Twitter, Facebook and so on.

Thanks

At the end of this post I have to say thank you:

  • to Lars and Karim, who did a great job over the last 8 years and who will support us during our first few months as INETA leads :)
  • to Torsten Weber, who supports us in the back-end with hosting the website, the mail servers and so on. This makes the job a lot easier. :)
  • to our current top sponsor Microsoft Germany, who also keeps the Speaker Bureau alive with their annual sponsoring. :)

BTW

If you want to be a sponsor of INETA Germany and support the German user groups and the German .NET community, please drop me a note. I would be happy to send you detailed information about the benefits of being an INETA Germany sponsor. :)

The right way to deploy an ASP.NET application to Windows Azure

07.01.2016 17:00:00 | Jürgen Gutsch

Deploying a website continuously to an Azure Web App is pretty easy today. Just click "Set up deployment from source control", select the source code provider you want to use, log on to your provider and select the right repository.

Now you get a new deployment tab in your Azure Web App where you can see the deployment history, including possible deployment errors.

You will find a more detailed tutorial here: Continuous deployment using GIT in Azure App Service

This is pretty cool, isn't it?

Sure, it is. But only for a small website, a small uncritical app or a demo to show how easy deployment to Azure is. The deployment of this blog is set up in this way. With this kind of deployment, the build of the application is done on the Azure Web App with Kudu, which works great.

But I miss something here if I want to deploy bigger and more complex web applications.

How can I run my unit tests? What about email notifications on broken builds? What if you need some special tasks while or before building the application?

You can add a batch or a PowerShell file to manipulate the Kudu process to do all these things. But there is too much to configure. I would have to write my own scripts to change the AssemblyInfos, send out email notifications, create test reports and so on. I would be writing all the things a real build server can already do for me.

I prefer to have a separate, real build server which does the whole job. These are almost all the tasks I usually need in a continuous deployment job:

  • I need to restore the packages first to make the builds faster
  • I need to set the AssemblyInfo for all included projects.
  • I need to build the complete solution
  • I need to run the unit tests and possibly some integration tests
  • I need to do the deployment:
    • a web application to an Azure Web App
    • a library to NuGet
    • a setup for a desktop application
  • I need to create a report of the build and test results
  • I want to send an email notification in case of errors
  • I want to see a build history
  • I want to see the entire build output of a broken build

Depending on the type of project there are some more, or maybe fewer, tasks to do.

I prefer Jenkins as a build server, but this doesn't really matter. Any other real build server can do this work as well.

To reduce the complexity on the build server itself, it only does the scheduling and reporting part. The only thing it executes is a small batch file which calls a FAKE script. For a while now, FAKE has been my favorite build script language. FAKE is an easy-to-use DSL for build tasks, written in F#. MsBuild also works fine, but it is not as easy as FAKE. I used MsBuild in the past to do the same thing.

In my case Jenkins only fetches the sources, executes the FAKE script and does the reporting and notification stuff.

FAKE does the other tasks, including the deployment. I only want to show how the deployment looks with FAKE. Please see the FAKE documentation to learn more about the other tasks. There are many examples and a sample script online.

This is how the FAKE build task to package and deploy an ASP.NET app looks:

// package and publish the application
let setParamsWeb = [
       "DebugSymbols", "True"
       "Configuration", buildConf
       "Platform", "Any CPU"
       "PublishProfile", publishProfile
       "DeployOnBuild", "true"
       "Password", publishPassword
       "AllowUntrustedCertificate", "true"
   ]

Target "PackageAndDeployWebApp" (fun _ ->
    MSBuild buildDir "Build" setParamsWeb ["My.Solution/My.Project.Web.csproj"]
     |> Log "AppBuild-Output: "
)

The parameters listed here are MsBuild properties. This all looks like a usual MsBuild call with FAKE, and it really is just a simple MsBuild call. Only the last four parameters are responsible for deploying the web app.

We need to add a publish profile to our project. To get one, download the deployment settings from the web app's dashboard on Azure and import the settings file into the publish profiles in Visual Studio. Don't save the downloaded file to the repository, because it contains the publish password. The publish profile will be saved in the web app's Properties folder. Use just the file name of the publish profile here, not the entire path. I pass the profile name from Jenkins to the script, because the script should be as generic as possible and be usable to deploy to development, staging and production environments. The publish password is also passed from Jenkins to the FAKE script, because we don't want to have passwords in the Git repository.

DeployOnBuild calls the publish target of MsBuild and starts the deployment based on the publish profile. AllowUntrustedCertificate avoids some problems with bad certificates on Azure - sometimes MS forgets to update their certificates.

All variables used here are initialized like this:

let buildDir = "./output/"

let buildConf = getBuildParamOrDefault "conf" "Retail"
let buildNumber = getBuildParamOrDefault "bn" "0"
let buildVersion = "1.16." + buildNumber

let publishProfile = getBuildParamOrDefault  "pubprofile" ""
let publishPassword = getBuildParamOrDefault  "pubpwd" ""

To pass variables from the build server to the FAKE script, just change the sample batch file a little bit:

@echo off
cls
"My.Solution\.nuget\nuget.exe" "install" "FAKE" "-OutputDirectory" "tools" "-ExcludeVersion"
"tools\FAKE\tools\Fake.exe" ci\build.fsx %*
exit /b %errorlevel%
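
Jenkins can then hand the actual values over on the command line: anything passed after the script name as name=value should be picked up by getBuildParamOrDefault in the FAKE script. A hypothetical Jenkins build step could call the batch file like this (the profile name and the environment variables are placeholders):

build.bat conf=Release bn=%BUILD_NUMBER% pubprofile=MyWebApp-Staging pubpwd=%PUBLISH_PASSWORD%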

Sure, it isn't as easy as the Kudu way, but it is simple enough for most cases. If the build needs some more complex tasks, have a look at the FAKE documentation and the corresponding Git repository. They have a solution for almost all the things a build needs to do. But the best thing about F# is that you can easily extend FAKE with your own .NET code written in C#, F#, whatever...

New Year, New Role

06.01.2016 13:14:40 | Christian Binder

I wish everyone a good and successful 2016! In 2016 I will take on a new role as Technical Director for the Microsoft Technology Center in Germany. I will of course stay true to the topic of engineering & DevOps, and I look forward to meeting the community at the ALM Days 2016 and other conferences. You can still reach me via this blog, and I will keep posting. Here's to a good 2016 :)

Chris   

How to install the latest version of Nginx on Debian 8.1

06.01.2016 11:32:33 | Andreas Mehl

I am going to show you how to install the latest version of the Nginx web server on Debian 8.1.

1. Check whether a version is already installed

apt-show-versions nginx

2. I recommend removing the currently installed version to prevent errors. Be sure to back up your config somewhere else first.

apt-get remove nginx nginx-common # Removes all but config files.
apt-get purge nginx nginx-common # Removes everything
apt-get autoremove # After using any of the above commands, use this in order to remove dependencies used by nginx which are no longer required.

3. To install the new version, run the command below to download the Nginx repository signing key

cd /tmp/ && wget http://nginx.org/keys/nginx_signing.key

4. Then run the command below to install the repository key

apt-key add nginx_signing.key

5. To add the Nginx repository, open the sources list

nano /etc/apt/sources.list

6. Copy and paste the lines below into the file and save it

deb http://nginx.org/packages/mainline/debian/ jessie nginx
deb-src http://nginx.org/packages/mainline/debian/ jessie nginx

7. To see which version you are going to install, run the following command (a simulated install):

apt-get -s install nginx

8. Finally, run the commands below to install Nginx

apt-get update && apt-get install nginx

9. Check which version you have installed

nginx -V

That's it! This is how you install the latest version of Nginx on Debian 8.1 Jessie. This tutorial also applies to earlier versions of Debian, and it may apply to future versions if the Nginx repositories don't change.

Have fun :) Enjoy!

 

What is Visual Studio Code? A free open source code editor for Windows, Linux and OS X

05.01.2016 13:52:32 | Kay Giza

Visual Studio Code is a powerful, fast code editor for developing and debugging modern cloud and web applications, available for free in versions for Linux, OS X and Windows. The open source tool is ideal for daily use and is thus a perfect addition to the developer tools you already use. Unlike classic Visual Studio, it does not work on the basis of project files but at the file and folder level. Thanks to more than 1,000 extensions, the feature set of Visual Studio Code can be extended flexibly, regardless of the operating system. And thanks to an integrated update function, users can be sure they are always working with the latest version... [... more in this blog article on Giza-Blog.de]
