2017 Kermanshah earthquake

A deadly earthquake struck the Iran-Iraq border area in Kermanshah province on 12 November 2017 at 21:48 Tehran local time. The earthquake, internationally known as the 2017 Iran-Iraq earthquake, killed 530 people in Iran and 10 in Iraq, and wounded nearly 7,000 people in Iran. Now, 9 days after the earthquake, here are some points I have learned from it.

Social media is believed to have had a new kind of impact on this natural disaster. Media activists say that publishing earthquake news through social media such as Telegram and Instagram from the first hour of the incident increased the government's sensitivity. This matches my own feeling, although I did not follow the details of previous earthquakes such as the 2012 East Azerbaijan earthquakes and the 2003 Bam earthquake. Every channel and group on Telegram and every account on Instagram was discussing the Kermanshah earthquake.

Many people from outside Kermanshah sent non-cash aid to the victims of the earthquake. Although there is a coordinating organization, the Iranian Red Crescent, many people preferred not to send help through it. This caused misdirected aid: injured people received goods they did not need at the moment, and some goods, like bottled water, were wasted because of their large quantity.

I came across several pieces of fake news and fake photos. In one of them I saw an orphan rescued from under the ruins, but the photo was from the Nepal earthquake, not Kermanshah. In another I saw a photo of a young girl that was actually from the Iraq civil war, not the Kermanshah earthquake. Many people preferred informal media and news over formal sources like TV or newspapers, the opposite of my personal style. In this case I relied on IRNA, IRINN, national newspapers, the Iranian Red Crescent and some trustworthy social media channels that I personally know.

Various claims were made during the whole incident. Like other earthquakes, this one was attributed to HAARP too! Commentators with a weak science background started to discuss the earthquake, relating it to HAARP or a secret weapon test. One of them went further and said that this was an artificial disaster created by some powers to kill Kurds! Others tried to claim (or was it real?) that more aid was being sent to Shia victims than to Sunni victims.

Political rivalries arose again right in the middle of the earthquake. The current cabinet attacked the previous government, saying it had built low-quality buildings under the Mehr housing plan. They did this in the rush of the earthquake's first days, without any deep study. On the other hand, members of the former cabinet responded to this political war.

During the social media traffic after the earthquake, I found a group of volunteer developers working on a mobile app to manage and improve aid distribution during the earthquake. Their work is named “Kermanshah earthquake GIS system” (in Persian, سامانه جی ای اس زلزله زدگان کرمانشاه). The project is still in progress, but I hope it produces a helpful product.


A good architecture for an ASP.NET Core application

Since ASP.NET Core 1.0 was released in June 2016, I have used it in at least two projects. Each of them taught me new points, especially about project architecture: layering, DI, mapping and DTOs. A while ago I wrote about my experience here. Now I am trying to build on that experience and describe a project based on the improved structure.


1- Layers

One of the most important decisions is project layering. Personally I do not like many layers, but here I chose three layers for a good reason: I want to hide the database from the presentation. I do not want controllers or Web APIs to be aware of the internal structure of tables and fields, because this way:

  • Designing controller actions and Web APIs is easier, as they do not have to know everything about internal table design

  • Security is higher. Since ASP.NET model binding only fills the DTO, not the complete model (table), incorrect or malicious user input cannot be bound to fields that were never meant to come from the user (over-posting).

  • Avoiding the dirty checking mechanism of ORMs. If you receive an entire database model, there is a chance that Entity Framework detects it as a dirty object and tries to save it to the database when you did not mean it.

  • Avoiding confusing mappings by having only the needed properties

This is my suggested layering:

ASP.NET Core project layers


The user works with the presentation layer. The presentation layer is aware of only the service layer and transfers data to it via DTOs. The service layer in turn communicates with the data access layer via database models. In other words, the service layer isolates the presentation and data access layers from each other. The data access layer contains anything that should not be visible to the presentation layer.

The data access layer can be considered a thin layer, as it contains only the database models and the DbContext. The service layer contains repositories and all services. The presentation layer contains ASP.NET controllers, cshtml and css files.
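A minimal sketch of the boundary between the layers (the class and property names here are illustrative, not from a real project):

```csharp
using System;

// Data access layer: the full database model, hidden from the presentation layer.
public class User
{
    public Guid Id { get; set; }
    public string UserName { get; set; }
    public string PasswordHash { get; set; } // must never leak to the presentation layer
}

// Service layer boundary: a DTO exposing only what the presentation layer needs.
public class UserDto
{
    public Guid Id { get; set; }
    public string UserName { get; set; }
}

// Service layer: maps database models to DTOs before they cross the boundary.
public static class UserMapper
{
    public static UserDto ToDto(User user) =>
        new UserDto { Id = user.Id, UserName = user.UserName };
}
```

Since `PasswordHash` simply has no counterpart on the DTO, neither the controller nor the model binder can ever touch it.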


2- Repository

Entity Framework’s DbContext is itself a repository. Normally there is no need to wrap its Add method, except in enterprise projects where special processing is needed on each add or update, for example setting a last-update time in a specific field. Adding an extra repository on top of DbContext/DbSet makes it harder to update just some fields of a record.
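For example, updating only selected fields directly through the DbContext is one line per field; a generic repository wrapper tends to hide this. A sketch (AppDbContext, Products and the Product entity are illustrative names):

```csharp
// Update only the Price field of an existing row, without loading it first.
using (var db = new AppDbContext())
{
    var product = new Product { Id = productId, Price = newPrice };
    db.Products.Attach(product);                                // track without querying
    db.Entry(product).Property(p => p.Price).IsModified = true; // mark one field dirty
    db.SaveChanges();                                           // the UPDATE touches Price only
}
```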


3- Unit of work

Unit of work is usually not a tough problem. You simply call the DbContext's SaveChanges() in the controller's Dispose method. This provides an automatic unit of work for all actions. But wait, there is a special case where this is not a good idea: what if a problem occurs while committing changes to the database?

You will not be aware of it. Worse, your changes fail to commit to the database and the user will not even be aware of it, because by the time the Dispose method is called, the response has already been sent to the user's machine and it is too late to inform him/her. My suggestion is not to use the Dispose method, and instead to call SaveChanges() manually in each controller action, so you can detect possible errors and inform the user about them.
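A sketch of an action written this way (`_db`, `_productService` and `ProductDto` are illustrative names; `DbUpdateException` is EF Core's save-failure exception):

```csharp
[HttpPost]
public IActionResult Edit(ProductDto dto)
{
    _productService.Update(dto);
    try
    {
        // Commit now, while we can still change the response.
        _db.SaveChanges();
    }
    catch (DbUpdateException)
    {
        // The commit failed and the user can actually be told about it.
        ModelState.AddModelError("", "Saving your changes failed. Please try again.");
        return View(dto);
    }
    return RedirectToAction("Index");
}
```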


4- Handling validations in application or in database?

One popular approach to database validation is to commit the data changes to the database and see whether everything goes well or not. For example, putting duplicate values into a unique column does not cause an error on the application side, but when the data is sent to the database, it generates an error complaining about the duplicate values.

There are two approaches here. First, leave it as is and let the database enforce our business logic for us. This approach needs almost no effort to implement, as everything is passed to the database side, but it requires accurate model definitions, since they will be converted into the database schema. This approach is also dependent on the underlying database: if the underlying database changes, there is a chance that the behavior of the system changes. Something that works on MSSQL 2014 may not work the same on SQLite. This approach also makes unit testing hard, as some business logic is not in the code and so cannot be unit tested.

The second approach is not to rely on the database: all validations and rules are checked in the application itself. There is no dependency on the database and unit tests can be applied well, but it needs extra code and also has a performance penalty, as the database is hit more than once: at least once for validations and once for committing the changes. Normally my personal choice is to do validations on the application side.
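A sketch of the second approach, keeping the uniqueness rule in application code. The `IUserLookup` abstraction and `RegistrationValidator` are illustrative names I introduce here so the rule can be unit tested; in the real service the lookup would be backed by the DbContext:

```csharp
using System;

// Abstracts the "does this user name exist?" query away from the database.
public interface IUserLookup
{
    bool UserNameExists(string userName);
}

public class RegistrationValidator
{
    private readonly IUserLookup _lookup;
    public RegistrationValidator(IUserLookup lookup) => _lookup = lookup;

    // Returns an error message, or null when registration may proceed.
    // A UNIQUE index should still back this check, because another request
    // may insert the same name between validation and commit.
    public string Validate(string userName)
    {
        if (string.IsNullOrWhiteSpace(userName))
            return "User name is required.";
        if (_lookup.UserNameExists(userName))
            return "This user name is already taken.";
        return null;
    }
}
```

Note that this is exactly the logic that cannot be unit tested when the unique constraint is left entirely to the database.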


5- A class for each service

I used to have a few big classes for services and multiple DTO/ViewModel/Model classes to receive user input from the ASP.NET binder. Now I think that is not a good idea. It led to large service classes that can be considered God classes, along with multiple model classes containing only simple properties and no methods.

Now I prefer a separate class for each single service. It contains all the model properties and the code needed to implement that service. It is more object-oriented and more manageable. In ASP.NET Core I pass IServiceProvider to the class so it can get the services it needs from the ASP.NET Core built-in DI. See a sample code:
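A minimal sketch of this style (`RegisterUserService`, `AppDbContext` and `User` are illustrative names; `GetRequiredService` comes from Microsoft.Extensions.DependencyInjection):

```csharp
// One class per service: it carries both the input properties (filled from
// user input by the ASP.NET binder) and the code that implements the service.
public class RegisterUserService
{
    // Input properties, bound from the request.
    public string UserName { get; set; }
    public string Email { get; set; }

    public void Run(IServiceProvider provider)
    {
        // Pull dependencies from ASP.NET Core's built-in DI container.
        var db = provider.GetRequiredService<AppDbContext>();
        db.Users.Add(new User { Id = Guid.NewGuid(), UserName = UserName, Email = Email });
        db.SaveChanges();
    }
}

// In a controller action, the binder fills the service object and we run it:
// public IActionResult Register(RegisterUserService service)
// {
//     service.Run(HttpContext.RequestServices);
//     return RedirectToAction("Index");
// }
```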


6- Misc points

  • Make the model’s Id a GUID rather than an auto-increment integer. It performs better because you do not need an extra round trip to the database to get the assigned id.

  • Do not use interfaces at all; in my experience they are not useful.

  • Use bower or similar tools to install client-side frameworks.

  • Be careful while using AutoMapper. Properties can easily differ between the two sides, and no error is raised when they do.
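To illustrate the GUID-id point from the list above (the Order class is illustrative):

```csharp
using System;

public class Order
{
    // The key is generated in the application, so it is known before the
    // INSERT happens; no second query is needed to learn the assigned id.
    public Guid Id { get; set; } = Guid.NewGuid();
    public decimal Total { get; set; }
}
```

With an auto-increment key you would have to call SaveChanges() and read the generated value back before, say, linking child records; here the key is available immediately.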

An outsource experience

Recently I have been working on an outsourced/remote project, a software solution consisting of both web and mobile parts. As I am always interested in outsourced projects, any experience with one is important to me, so I am trying to document my experience on this project from an almost non-technical perspective.

Keep calm and outsource.

  1. In this sample project we had project management, but it was not enough. Not complying with sprint patterns was a main issue: new issues were added to the current sprint without noticing that this reduces the effect of planning.
  2. A similar issue was losing focus by working on tasks outside of the current sprint.
  3. In some cases there were tasks resolved in only half an hour, while other tasks took about 12 hours. I mean tasks were not broken into similar-sized pieces: some of them were actually more than one task, and some could simply have been merged with other tasks.
  4. Too many changes in requirements caused real delays and were hard to apply. I know that in agile environments changes are normal, but I mean too many changes. Consider many changes in database table structures that caused many other changes in the back end and even in the APIs.
  5. This project had a decent amount of documentation from the first day, but some parts were not clear enough. The documentation problem became more serious as the project grew and new people joined. It would have been better if questions and answers between team members had been added to the documentation, but they were not. The documentation could have been more up to date and more structured.
  6. I was forced to implement soft delete in the system tables, but after a while I realized that we did not really need it. All the customer actually wanted was logging of record changes. I am not sure whether to call this bad communication or letting the product owner make technical decisions.
  7. Not using full-featured ALM tools had a negative impact on productivity. A tool that automatically published the latest version on each git push could have given us a more up-to-date test server.
  8. Not all team members were comfortable with the written culture of remote working environments. In a team spread across several cities, it was important that every activity was logged in Jira, Slack, emails, etc. Any member should be able to get information about other members' work and tasks. This is even more important when the team members' overlapping work hours are short.
  9. As a team working in different time zones with different work schedules, we had a serious problem: long wait times between actions. Member A files a bug in the bug tracking system; hours or even a day later member B wants to resolve it but needs more information, so he adds a comment, which member A sees a day later, and so on. You can see that resolving one bug can take several days. Either member could have shortened this with better problem-solving skills: member A by trying more inputs and putting the system into more states to pin down the possible bug, and member B by thinking on behalf of A and trying to solve the issue with fewer round trips.
  10. Team organization was not ideal. Splitting the team into a web part and a mobile part made tracking issues harder. We could have been more agile if each part had been able to run the other part's code by itself. In a small team, when work is passed around via test servers rather than in source code, even a small task needs more time to test.


I believe these kinds of issues have roughly three roots. One is cultural differences that give us different impressions of team roles, for example of the scrum master or of a back-end developer. Another root is not putting enough time into managing the team and acting on its weaknesses. And the last one, in my view, is that the team had not worked together before. A team needs time to reach its full power, and team members need time to get acquainted with each other.


For a technical review of this project, see here and here.

Creating a framework to be used as a base for many other applications

Oh, interesting: another retrospective just a few days after the last one. Around 2010/2011 we developed a base ASP.NET WebForms framework named ABC and then built a series of web applications on top of it. One of them is DEF, which is currently (late 2016) heavily used in production, and I guess it will stay in production at least until 2021 or even 2025 (there is no retirement plan yet). DEF deals with a database of more than 10,000,000 records and is used country-wide as a national project. DEF is encountering performance problems and is under constant but small changes requested by the client.


Many companies, especially those that are not very technical at the managerial level, love to write code once and use it in many projects. This is why many IT companies have internal frameworks, and many people like myself wonder whether it is a good idea at all and, if so, what the best platform for it is. Creating ABC and building DEF and other applications on it is a good sample of this approach. I am going to review its real weaknesses, as it has now been in production for a few years.


Large Data

ABC has been used as the base of many applications, but none of them deals with as many database records as DEF. On the other hand, ABC was not designed to deal with that amount of data, so DEF has more performance issues than other ABC-based projects. The performance issues in turn cause other problems, such as failures to save records when system load is high.



Upgrade

As ABC is the base framework and many applications based on it are in production, some with multiple instances, upgrading ABC is hard. Suppose I want to upgrade a component in ABC to enhance some feature; this upgrade may cause problems for the other applications, and at the very least I cannot be sure the upgrade is safe for all of them. In DEF's case we needed to upgrade NHibernate, and we did it through a painful and very lengthy process.


Internal mechanism

Like the upgrade problem, we have difficulties changing internal mechanisms and designs. For example, changing transaction management is somewhat necessary for DEF, but it must be done through ABC, and since others are using ABC too, it is not easy and sometimes impossible to accomplish. As a result, DEF is forced to live with problems that we know the cause of, and that we would know how to correct if DEF were standalone.


Do everything through application channel

For a small application that can run on a shared host, it is not a bad idea to do every operation through the web application itself. But in a large application like DEF there are situations where other tools are needed. For example, we have batch operations that take perhaps 30 or 60 minutes to complete. A good tool for this type of work is a Windows service, but DEF uses ASP.NET and IIS for its batches, which is not good. Many application pool restarts occur during batch or lengthy operations; they also slow down currently logged-in users, drain IIS resources and possibly cause secondary problems. Another example is handling a database table with a large record count. We struggled to handle it inside the application, while a better way would have been to introduce a secondary database and define jobs that move old records into it, keeping the main database lighter.


Creating packed mega ASP.NET controls

If you are familiar with ASP.NET WebForms, you know that plenty of ASP.NET controls are available there, like the drop-down list. In ABC we created some mega controls along the same lines but for bigger operations. Think of them as similar to Telerik or Dundas controls, but larger and wider in scope, for example a grid that could do paging, sorting and searching. In theory they were very useful and time saving, but they were tightly coupled to ABC's internal structure and were very inflexible.



General-purpose frameworks look very promising before they are used, but in production many of their gaps are exposed. They are good for short-term use and for very, very similar applications. If you want speed and flexibility, think more about a “create each application from scratch” strategy.

Review structure of a web application that I’m working on

There is a workflow in Scrum called the retrospective: reviewing the work done in a sprint. I love the idea; I think talking and communication in a software development team is very important. Inspired by the Scrum retrospective, I'd like to review the architecture and design of a project I've recently been involved in. The project is not finished yet, but I think it is a good time to review its structure.


The project back end is implemented with ASP.NET MVC Core and Entity Framework Core and serves both a web API and server-rendered content (ASP.NET MVC). Development is done mostly on Ubuntu, but on Windows too.


Projects Structure

Although the project is not very big or complex and has about 20 database tables, we decided to have four projects: one for the domain, DTOs and contracts, named Domain; another mainly for business logic, called Core; another for the web project itself, containing the cshtml files, controllers and the wwwroot directory, called Web; and another for unit tests, called Test. I agree this is a very common project structure for ASP.NET web applications, but I saw no benefit from it beyond simple categorization, which was also achievable with directories. I think it would have been better to have two projects: one for the web (combining Domain, Core and Web) and another for tests.



Interfaces

Programming against interfaces is very popular in C#. It has gained more popularity with the wide usage of dependency injection in ASP.NET, and ASP.NET Core's built-in dependency injection has increased it further. We followed this practice in our project and created an interface for each service class. But we have no mocks in our unit tests, so I think using so many interfaces is a bit of over-engineering: no interface in our project has more than one implementation. The large number of interfaces just decreased development velocity, since adding each new method required changes in two places, the service itself and the interface it implements.


Soft Delete

Soft delete means not deleting database records physically, but instead keeping them in the database and setting a field named IsDeleted to true in each deleted record, so that soft-deleted records are not shown or processed in the application. We added this feature so we could track data changes and never really lose any data. For this purpose we could instead have used a logging mechanism, where deleting a record adds a log entry saying who deleted what data and when. Implementing soft delete imposed many manual data integrity checks on the application: on each delete we must check whether any dependent item exists and, if so, prevent the deletion.
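A sketch of the basic mechanism (the Customer entity and AppDbContext are illustrative names). If EF Core 2.0 or later is available, a global query filter can hide soft-deleted rows from every query automatically instead of filtering by hand in each service:

```csharp
public class Customer
{
    public Guid Id { get; set; }
    public string Name { get; set; }
    public bool IsDeleted { get; set; } // set to true instead of issuing a DELETE
}

public class AppDbContext : DbContext
{
    public DbSet<Customer> Customers { get; set; }

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        // Every query against Customers silently gets WHERE IsDeleted = 0.
        modelBuilder.Entity<Customer>().HasQueryFilter(c => !c.IsDeleted);
    }
}
```

Note that the filter only hides the rows; the manual dependency checks described above are still needed before marking a record as deleted.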



Authorization

I personally over-engineered authorization: in many cases I checked roles in both controllers and services. My emphasis on the separation of controllers and services was a bit too high. There is no entry into the app other than MVC controllers and API controllers (they are the same in ASP.NET Core), so checking roles in the controllers is enough.


Using dtos to access data layer

Many application designs allow direct access to database models; for example, a controller action receives a MyModel instance directly and passes it to a DbSet or to services to save it. This is dangerous because the ORM's dirty checking mechanism may save it to the database by mistake. In this project I used DTOs to pass data to and from the CRUD services, so controllers are not aware of database models. It increased the volume of development, but I think it saves us from mysterious data updates in the database.
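A sketch of how a CRUD service applies such a DTO: instead of attaching a user-supplied model, it copies the whitelisted fields onto the tracked entity one by one, so dirty checking can only flag fields we deliberately touched (all names here are illustrative):

```csharp
using System;

// What the controller sees and the binder fills.
public class CustomerDto
{
    public Guid Id { get; set; }
    public string Name { get; set; }
}

// The database model, known only to the service and data access layers.
public class Customer
{
    public Guid Id { get; set; }
    public string Name { get; set; }
    public int InternalScore { get; set; } // never exposed through the DTO
}

public static class CustomerUpdater
{
    // Copy only the whitelisted fields; InternalScore cannot be changed
    // from the outside, accidentally or maliciously.
    public static void Apply(CustomerDto dto, Customer entity)
    {
        entity.Name = dto.Name;
    }
}
```

In the real service, `entity` would be loaded from the DbContext by `dto.Id` before `Apply`, then saved with SaveChanges().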


The second part of this post can be found here.

Software component re-use in ASP.NET and Django and is it really beneficial?

I have been an ASP.NET developer for years. In many companies and projects I have worked on, there is a constant need for reusable components. Some companies are completely serious about it and have based their business on it. When I run software projects myself (as a project manager or freelancer), I encounter this subject again.


Realistic or not, it would be very favorable if we could use previously developed components in new projects. It would be very time saving, especially in repetitive projects. Code reuse exists at different levels of ASP.NET. You can use HTML helpers or user controls to share components between projects; you can also use services like a logger or authentication in several projects. There is a type of component reuse in ASP.NET, used by modules like ELMAH, that is based on HTTP modules or middlewares. None of these is the component reuse I need. What I need is a complete set: all core and UI elements together. In the logger example, I need the core logic and all the needed UI together, in a way that I can plug into a new application so that other components of the application can communicate and integrate with it. I know there is a solution in ASP.NET called Areas that approximately does what I need. It does its reuse at the view (UI) level well: I just copy files into its directory. But it is not designed as a really separate component; it is forced to be aware of the mother application's internal functionality, especially its database design. Maybe that is the reason ASP.NET MVC Areas are not very popular.


I've read a lot about Django, which is reuse-friendly by design. I see it is based on apps, and there is even an app-sharing web site for it, but I have never used it in a real project.


Thinking more and more about software reuse (in the context of web development), I realize that not every component reuse suits every application. There is a trade-off here. If you want a reusable app, you have to develop it as generically as you can. That itself causes complexity, creates bugs and even consumes more development time. Once you start using a component across several projects, you must think carefully about every change you make: each change must be confirmed as backward compatible, since others are using your app. So maintenance becomes hard. Apparently this is the reason many web development teams do not employ reusable components a lot.


There is at least one situation where this model of software reuse makes sense: when you produce a reusable app for a limited range of projects and a limited period of time, intending to use it only within a family of projects. Here it helps that Django applications are developed in this manner by default, whether you want to reuse them or not.

Finding a good front end solution for a semi single page web application

I am in the middle of deciding what technique or library I should use on the front end of a typical web application. The major section of this application is done with ASP.NET MVC using a full back-end approach, so very little front-end development and very few Ajax calls exist, except for cascading drop-downs or implementing auto-completes. Every operation is done via server post-backs: when you do a CRUD or other operation, your request is sent to the server, the result is rendered on the server, returned to the client and finally shown. In this manner the front end cannot be very complicated. Pages cannot have too many elements and/or too many operations; for large operations, more than one page is needed: a page for the main operation, then a page for each sub-operation, typically navigated to from a main list page.


But the problem begins where some pages are expected to have more than one operation, for example a page for CRUD on some models with sub-pages for complementary operations. Supposing no post-back is allowed there, we need front-end development. Interacting with the DOM and reading/updating it needs plenty of jQuery code, plus a few server APIs and Ajax calls to them. As the pages get larger and need more interaction with the user, for example getting the user's confirmation or opening more dialog boxes, the volume and complexity of the front-end code increase. So the need to reduce complexity and development time arises.


Here we have three options. First, do not allow much front-end development and handle the whole application with back-end MVC only. Front-end pages will be simple this way: pages cannot have more than one operation, and every operation causes a post-back. The total number of pages will increase, as each single operation needs a separate page.


Second, we can allow multi-operation pages but use no Ajax calls. That means jQuery is used to open dialog boxes and gather user data, but instead of using Ajax, the form is posted to the server, so a post-back occurs. This technique is inflexible, because it is not easy to show dialog boxes or get confirmations from the user. Everything is posted to the server, possible errors are detected there, and error messages are sent back to the client. Also, no state can be maintained: after the page is returned from the server, active controls and even data entered in inputs are lost. Because of these inflexibilities, this technique is not very applicable.


The third technique is to get help from JavaScript libraries and frameworks developed for this problem. This way we get all the functionality we need on the front end: good user interaction, low code complexity and low implementation time. The cons are the learning time, the setup time and the overhead they may introduce.


If we go for the third solution, a good choice is to use the MVC/MVVM JavaScript frameworks that are mostly used for SPAs. Our goal is also an SPA, but only for some sections of the web application, not all of it. The famous JavaScript frameworks for SPAs are Angular.js and Ember.js, but they are too large for our problem, so a smaller one must be selected from several comparisons, including this, this and this. From them I feel that Backbone.js (MVP) and Knockout.js (MVVM) are the better choices. Backbone.js uses the more familiar pattern, MVP, and I read somewhere that Knockout.js development is slow and its community is shrinking. So Backbone.js could be the final choice.


After discussing it with my friend, I decided to add a fourth solution: doing the front-end manipulations with pure jQuery/Ajax code. This code may be lengthy, but it has less overhead than employing an SPA framework like Angular.js or Backbone.js.

Update 2

Shawn Wildermuth also did a comparison recently. Find it here.

Selecting a web framework based on reusability and pluggability of components

There are plenty of comparisons of web frameworks on the Internet. Many compare web frameworks in general, some compare them on performance measures, and some compare learning curve, popularity, architecture, speed of development, etc.


But I am interested in focusing on the reusability and pluggability of components. In a web development team it is good to be able to use previously developed portions of a project in new projects. For example, many projects have a membership or accounting section; these can be developed once and used in different projects. You can even think of ticketing or organizational structure management shared across separate web projects. The goal is to reduce development effort when building mid-level web projects.


Django presents itself as such a framework, but how about Rails, ASP.NET, MEAN or other common web frameworks?


Django has an administrative CRUD interface that can save a lot of time during development. Django's motto is “the web framework for perfectionists with deadlines”. Every Django project consists of apps, and each app can implement an independent field of business. Django claims that you can join different apps to create a complete web application.


Django has good documentation, but its learning curve is steep. It seems efficient for database-driven applications.

Django is not fully object-oriented. It is not as fast as Node.js, but it does not force you to build everything from scratch. It also seems to have fewer batteries included than Node.js and Rails, and its job market is even smaller than those of Rails and ASP.NET.


Rails is opinionated, so many settings and conventions are set by default. Developers can learn it faster and develop more rapidly. It is not very good at performance, but it has a strong community. Rails is popular among Mac users. It is also easier to deploy, as cloud solutions support it better. Its rapid development can compete with the reusability feature that Django claims.


Previously I wrote about the subject here, here, here and here.


What are your opinions and experiences?

Using cloud storage services

As a person practicing remote working, freelancing and working with distributed teams, using cloud storage services like Google Drive is inevitable for me. Each service has its own pros and cons, especially when you live in a place with serious Internet constraints and international sanctions.


Using cloud storage services helps you be more organized and productive. You want to send a copy of a file to colleagues and, after updating it, send updated copies again? Using email is tedious and error prone. Instead, you can put the file into a cloud storage system and share it with your colleagues, so everyone has access to it. If you or your colleagues update the file, there is no need to send a new copy to each other; everyone just gets the new file automatically. Cloud storage services also save the history of a file for you, so you can see changes over time or download an older version if you want.


Cloud storage services provide good ways to notify collaborators when someone changes a file or adds new files to a shared space. They also keep track of conflicts, and they are good even for individuals who just want to share their own files among their own devices: PCs, notebooks, tablets and mobile phones.


Not using a cloud storage service in a distributed team is, to some degree, like a development team not using a source control system.


Despite all the advantages of cloud storage services, there are also disadvantages. They tend to consume huge amounts of your bandwidth, and some of them are expensive. Using cloud storage services, you quickly get confused by different operating systems, file formats, utilities, etc.: you use MS Office while others put LibreOffice file formats into the shared space; you can edit a specific file on your PC, but your tablet has no editor for it; your MacBook works well with a cloud storage service while there is no suitable client for your tablet. And don't forget that security is a big concern.

cloud storage

For many users, especially those who have Android devices, the default selection is Google Drive. It has online editors and good integration with Android devices, but its big problem is US sanctions. Due to the sanctions, its desktop client for Windows cannot be downloaded from sanctioned countries. I guess even if you download it with some work-arounds, there would still be restrictions on using it, because the APIs are still on the Internet and you must hope they are not restricted for sanctioned countries.


Another popular choice is Microsoft OneDrive. It has excellent integration with Microsoft Office Online, and its plans are much less expensive than other services. But there are some problems with it. It has no official client for Linux; there is an unofficial project for it called onedrive-d, but it does not work very well. OneDrive also stopped its service to sanctioned countries in October 2016.


For Linux users residing in sanctioned countries, Dropbox is a good choice. It is accessible from these countries, it installs easily on Linux, and it even works well with Android, though it is an expensive service. It also offers Office Online for editing files online.


If you are a Linux user using LibreOffice and want to be able to edit your files on Android and online in Dropbox, be careful: there is no Android version and no online version of LibreOffice. You have to save your files in MS Office formats and use MS Office for Android and MS Office Online.


Developing an ASP.NET Core project on Ubuntu

I have been trying to migrate from Windows/.NET to Linux for a long time, but the job market demands .NET more than non-Microsoft technologies. My hope was ASP.NET Core. It was not very good in the Beta/RC days, but now that the final version has been published, my mind has changed: it works well on Ubuntu.


To start, I created several test projects with ASP.NET Core on both Windows and Ubuntu. I wanted to be sure that ASP.NET Core is capable enough to rely on in a real-world project. I tried to test all aspects of it: MVC, Web API, Razor, NuGet packages, EF Core, Identity Core, the internal DI/IoC container, project.json, SQLite, unit testing, and tooling, including Visual Studio Code, debugging, auto-complete, code formatting, shortcuts, etc. I tested them on a Windows machine and on an Ubuntu machine with both the Unity and Xfce desktops. All tests showed that I would not encounter a big problem with ASP.NET Core itself in the first place, or with using it on Ubuntu in the second.


Using .NET Core on Ubuntu is the same as on Windows, except that you rely more on the terminal than on a GUI. The dotnet commands are exactly the same on Windows and Ubuntu: same names, same switches, same operations, and same outputs. It is a really good thing that they are identical. When it comes to tooling, though, it is different. If your IDE on both platforms is Visual Studio Code, they do not differ much, but if you are used to Visual Studio 2015, you can understand how deep the difference between Visual Studio Code and Visual Studio 2015 is. The latter is fully integrated and does everything you need with a few shortcuts, while Visual Studio Code needs many configurations to behave like Visual Studio 2015. Debugging in Visual Studio Code is not as easy as in Visual Studio 2015; you need some operating system skills to be able to debug code in it. My good friend from Visual Studio 2015, IntelliSense, was not working at all in the first days. Now that it works, it is still not as good as in Visual Studio 2015, and it shows many unrelated items. By the way, having Visual Studio Code on Ubuntu is like a miracle. It is very similar to Visual Studio 2015: it has code highlighting, similar shortcuts (on the Unity desktop), good integration with git, real-time compilation (to show errors in the code), and so on. Did I mention that you can use Yeoman as a substitute for the Visual Studio 2015 templating system that is absent from Visual Studio Code?
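As a sketch, the basic workflow with the .NET Core 1.x CLI looks like this on both systems (these are the project.json-era commands described above; later SDK versions changed some verbs, so treat this as illustrative rather than authoritative):

```shell
# Scaffold a new project in an empty directory
mkdir hello && cd hello
dotnet new            # generates Program.cs and project.json (1.x tooling)

# Restore the NuGet packages declared in project.json
dotnet restore

# Compile and run -- same commands, switches and output on Windows and Ubuntu
dotnet build
dotnet run
```

Exactly this sequence works in a Windows command prompt and in an Ubuntu terminal, which is what makes switching between the two machines painless.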


The project I am working on in Ubuntu is a regular web application, with parts rendered via MVC and parts delivered to a mobile app as a Web API. In the development environment I use SQLite as the database backend, but in production we will be using MS SQL Server. EF Core works well despite its constraints in version 1, and SQLite also works well as a development database. It does not support EF migrations completely, but it works the same on Ubuntu and on Windows. One thing that works great is that the code behaves exactly the same on Windows and Ubuntu. I change code on my Ubuntu machine, commit and push it to the server, then pull it on a Windows machine and continue my development there; it does not matter that I have switched from one OS to the other. The code can also be developed and run the same way in both Visual Studio Code and Visual Studio 2015. The only consideration is that the directory structure must be designed to be compatible with the structure Visual Studio 2015 expects.
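A minimal sketch of how such a SQLite-in-development, SQL-Server-in-production switch can be wired up in Startup.ConfigureServices. The AppDbContext class and the "Default" connection-string name are hypothetical placeholders for this sketch; UseSqlite and UseSqlServer come from the respective EF Core 1.x provider packages:

```csharp
public void ConfigureServices(IServiceCollection services)
{
    // _env is the IHostingEnvironment injected into the Startup constructor;
    // AppDbContext is a hypothetical DbContext used for illustration.
    if (_env.IsDevelopment())
    {
        // SQLite as the development backend; the database file lives
        // next to the application, so it travels with the repo checkout.
        services.AddDbContext<AppDbContext>(options =>
            options.UseSqlite("Data Source=app.db"));
    }
    else
    {
        // MS SQL Server in production, configured via appsettings.json.
        services.AddDbContext<AppDbContext>(options =>
            options.UseSqlServer(Configuration.GetConnectionString("Default")));
    }

    services.AddMvc();
}
```

Because both providers sit behind the same DbContext, the MVC and Web API code does not need to know which database is underneath.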


I have not yet deployed the project to a Linux machine, as our client will probably prefer Windows for it, but I hope hosting will not be a problem either. As a long-term .NET developer I am very excited about the cross-platform capability Microsoft has added to .NET, but frankly I am a bit worried about it. I am not sure whether Microsoft will continue down this road on Linux. I am afraid that the number of developers using .NET on Linux will not be as big as Microsoft imagines, and that Microsoft will then abandon it.


I started developing ASP.NET Core on Ubuntu with Unity (the standard Ubuntu desktop). Everything was good except high CPU usage. This problem was not caused by ASP.NET Core; it was caused by the 'hud service' from Unity. For this reason I decided to also try Xfce. It is a light desktop that does not have the hud service's high CPU usage problem, but it has problems of its own. The first thing you encounter is that the shortcuts are very different from Unity; I even lost Ctrl+F3 (for searching keywords in Visual Studio Code). In rare situations it has problems with high CPU usage from Visual Studio Code (OmniSharp), but the bigger problem is regular crashes of my applications, like Toggl, my favorite time tracker, and even Google Chrome. I am still using Xfce, but I think I will soon switch back to Unity and find another solution for the hud service's high CPU usage.


Please see some pictures of my experience:

(Screenshots: asp-net-core-debug-ubuntu-xfce, omni-sharp-high-cpu-usage, xfce-chrome-crashes)