Running Microsoft .NET applications on the AWS cloud

Those familiar with Microsoft .NET applications undoubtedly lean towards Azure when it comes to cloud computing. If anyone knows how to run a .NET application, it must be Microsoft, right? Rob Van Pamel, a .NET developer since 2007, was one of those Azure fanboys too. However, things changed for him in 2015, as he shared in a keynote for his colleagues.

"Our team was about to start a new ticketing project. Since it was a greenfield project, we had the freedom to choose anything, as long as it was cost-effective. One of the consultants suggested AWS, so we delved into it to make an informed decision."

Rob Van Pamel, .NET Solution Architect

The choice to combine .NET with AWS may not be obvious at first glance, but there are several important factors in favor of Amazon Web Services. A wide range of top-tier customers, from Netflix to NASA, constantly push the boundaries of technology, leading to continuous improvements. All AWS users benefit from this: more than 90% of new features originate from direct feedback from their customers.

These services are highly comprehensive, covering computing, storage, and databases, among other areas. They are also suitable for mobile and IoT applications and, as Rob discovered, superior to what Microsoft offers. "Amazon is one of the main Corporate Sponsors of the .NET Foundation. AWS has an engaged community, with developer advocates and active user groups."


How to get started?

Creating an account on AWS is free; you only pay for the services you use. Once you've gone through that initial step, you'll land in the AWS Management Console. This is your home base where you'll find all the services.

In his keynote, Rob also provided a demonstration on how to turn a local .NET application into a cloud-native application. He did this by outlining five different paths developers can take when leveraging AWS. Some paths are easier than others, depending on how far along your application is. For this, he delved into a number of tools that come in handy during such a migration, such as the API Gateway, Serverless Lambda functions, and the NoSQL DynamoDB.

Since many developers use Visual Studio, Amazon developed the AWS Toolkit. This plugin makes migration easier and offers various services, products, and project templates. The AWS Toolkit is available for Visual Studio, Visual Studio Code, and Rider.

Step 1: Migrate data from a local machine to an RDS database in the cloud

In his first demonstration, Rob showed how to adjust the application to use a database on an RDS Server instead of an on-premise database server. "I mentioned that AWS is open-minded: for every major database vendor, you can also run a database variant on RDS," he said. "Running an RDS database instead of an on-premise database is especially interesting for companies looking to reduce operational workload. With RDS, you don't have to worry about operational tasks like patching or taking backups. AWS takes care of that for you!"

In the AWS Explorer, you can easily launch a new Instance of an RDS Database server. You have a choice of different database engines, from MySQL to Oracle. Amazon also has its own engine, Amazon Aurora, which even has a serverless version. Once you've chosen an Instance type, you give the database an identifier and credentials. Then, you make it publicly available and assign the correct security group. At this point, your database is running in the cloud and ready to use!

Before our application can use the database, the correct structure must be created. Using Azure Data Studio, we create our database on the new RDS server. Since our application uses Entity Framework, it's sufficient to run the Entity Framework database migrations: we simply adjust the connection string and execute the dotnet ef database update command.
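As an illustration, a minimal sketch of what that change amounts to in code, assuming a SQL Server-flavored RDS instance and EF Core; the endpoint, credentials, and TicketingContext class are hypothetical stand-ins for the application's existing setup:

```csharp
// Program.cs - the only real change compared to the on-premise setup is the
// connection string, which now points at the public RDS endpoint.
// UseSqlServer requires the Microsoft.EntityFrameworkCore.SqlServer package.
using Microsoft.EntityFrameworkCore;

var builder = WebApplication.CreateBuilder(args);

// In practice this lives in appsettings.json (and later in the Parameter Store);
// it is inlined here only for illustration, with a made-up endpoint.
var connectionString =
    "Server=ticketing-db.abc123xyz.eu-west-1.rds.amazonaws.com;" +
    "Database=Ticketing;User Id=admin;Password=<secret>;";

// TicketingContext stands in for the application's existing EF Core DbContext.
builder.Services.AddDbContext<TicketingContext>(options =>
    options.UseSqlServer(connectionString));

builder.Services.AddControllers();

var app = builder.Build();
app.MapControllers();
app.Run();
```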

Step 2: Migrate the API to the cloud

At this point, our database is in the cloud, and the next step is to move our API from on-premise to the cloud. The advantage is that you can easily scale up by upgrading existing instances in the cloud, and scale out by adding new instances. This allows you to match your server capacity to your current workload rather than the workload you expect to have in a few years. The same applies in reverse: when the workload drops, you can just as easily scale back down. Try doing that on-premise!

In addition to the various memory, CPU, and networking capabilities within EC2, there are also different instance classifications. If you have a workload that requires a lot of memory, you can choose an EC2 type from the memory-optimized R family; if your application needs a lot of CPU compute, choose the compute-optimized C family.

Several terms come into play when migrating from on-premise to cloud. The Amazon Simple Storage Service, or S3, is a high-quality object storage service that you can compare to Blob Storage in Azure. You can use S3 for any blob storage, hosting your single-page applications in combination with CloudFront, or building data lakes.
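As a small illustration of how an application can talk to S3 directly, a hedged sketch using the AWS SDK for .NET (AWSSDK.S3 package); the bucket and key names are made up for the example:

```csharp
// Upload a single object to S3. Credentials and region are picked up from the
// environment (profile, environment variables, or an attached IAM role).
using Amazon.S3;
using Amazon.S3.Model;

var s3 = new AmazonS3Client();

await s3.PutObjectAsync(new PutObjectRequest
{
    BucketName = "ticketing-assets",   // hypothetical bucket name
    Key = "logos/axxes.png",           // object key inside the bucket
    FilePath = "./logos/axxes.png"     // local file to upload
});
```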

Another concept is EC2, or Amazon Elastic Compute Cloud. This web service lets you provision computing capacity in the cloud in a simple way: it starts Windows or Linux virtual machines in the cloud for you.

To bring our application to EC2, we use the orchestrator Elastic Beanstalk. This orchestrator, in combination with S3, will deploy our application to EC2.

At this stage, our application still uses a secret file for the username and password. Because this secret file is not available in the cloud, we temporarily add the user ID and password to the app settings; later, this will be corrected with another service. Once you've set the correct instance type and entered all the data, you can deploy your application to Elastic Beanstalk. Your .NET application is published via the dotnet publish command, the output is zipped and uploaded to S3, and from there Elastic Beanstalk starts your application on a virtual machine.

In the console, which is pretty much the home for everything you do in AWS, you'll also find Elastic Beanstalk. There you'll find your application and all associated data. Whenever you upload something, a new version is created. To securely store our user ID and password, we use the Parameter Store. This allows you to store data in a secure location.

You'll find this in the Systems Manager of your console. Here you can create various parameters, such as the User ID or password. Once that's done, you can use them in your application by modifying the configuration in your program file to read the Parameter Store. Then you can redeploy to Elastic Beanstalk. Linking to the Systems Manager is just a matter of a few clicks: adding the correct prefix and configuration to your application.
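A minimal sketch of what that wiring could look like in Program.cs, assuming the Amazon.Extensions.Configuration.SystemsManager NuGet package and a hypothetical /ticketing parameter prefix:

```csharp
// Program.cs - pull configuration from the Systems Manager Parameter Store.
using Microsoft.Extensions.Configuration;

var builder = WebApplication.CreateBuilder(args);

// Every parameter stored under the /ticketing prefix (e.g. /ticketing/Database/Password)
// shows up in IConfiguration as Database:Password, so the rest of the application
// keeps reading its configuration exactly the way it already did.
builder.Configuration.AddSystemsManager("/ticketing");

builder.Services.AddControllers();

var app = builder.Build();
app.MapControllers();
app.Run();
```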

Step 3: Containerize your API

Another option is to use containers for our application. "By containerizing your API, your application has a faster spin-up time, and resources will be better utilized," said Rob. "Another advantage is that our application is less dependent on certain operating systems and their configurations. How often does it happen that an application behaves differently in the production environment than in the test environment because that one parameter is set differently in the test than in the production? By using containers, we can minimize these kinds of risks."

To rebuild our application around containers, we need at least two building blocks: a container registry and container orchestration. The first is AWS ECR, or Elastic Container Registry. You can compare it to, for example, Docker Hub, but for private Docker images.

The second function is called AWS ECS, which stands for Elastic Container Service. With this, you orchestrate your containers: you start, restart, and scale them. AWS ECS has three main components: Tasks, Services, and Clusters.

With a Task, you start a Docker container based on a Task definition; a Service is an automated way to do that, keeping the desired number of Tasks running and scaling them. Clusters, in turn, bundle Services and Tasks together. Most applications run fine on ECS, but the slightly more expensive managed Kubernetes service, AWS EKS, gives you even more configuration options, making it even more powerful.

When migrating our application to container instances, you add a load balancer for your application. It sits in front of the application, so scaling out to multiple instances is supported immediately. Support for creating a Docker container is available out of the box in Visual Studio: the generated Dockerfile describes how the application should be built. Once the Dockerfile is added, you can publish the container to AWS. In the accompanying wizard, you indicate whether it is a Service or a Scheduled Task.
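For reference, a sketch of the kind of Dockerfile Visual Studio typically generates for an ASP.NET Core API; the project name and .NET version are illustrative:

```dockerfile
# Build stage: restore and publish the API (project name is hypothetical).
FROM mcr.microsoft.com/dotnet/sdk:6.0 AS build
WORKDIR /src
COPY ["Ticketing.Api/Ticketing.Api.csproj", "Ticketing.Api/"]
RUN dotnet restore "Ticketing.Api/Ticketing.Api.csproj"
COPY . .
RUN dotnet publish "Ticketing.Api/Ticketing.Api.csproj" -c Release -o /app/publish

# Runtime stage: only the published output ends up in the final image.
FROM mcr.microsoft.com/dotnet/aspnet:6.0 AS final
WORKDIR /app
EXPOSE 80
COPY --from=build /app/publish .
ENTRYPOINT ["dotnet", "Ticketing.Api.dll"]
```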

Creating the Load Balancer also happens in the wizard where you enter the correct settings and permissions. When publishing your application, the toolkit will zip it and create your container. An image for your container is created and uploaded to AWS ECR, after which the ECS cluster is configured. The Services and Tasks you entered will also be created. After this, you're ready!

In your AWS Management Console, you'll see that things have changed. In your Elastic Container Service, for example, you'll see that a new Service has been defined. In the Log Configuration, you'll see some defaults that have been applied. This Log Configuration is connected to CloudWatch, the log provider for your application. Every output your application makes is forwarded to your logs on CloudWatch. It's a very simple way to see what your application is doing.
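Nothing AWS-specific is needed in the code for this: anything written through the standard ILogger (or to standard output) ends up in the CloudWatch log group configured for the task. A small sketch, with a hypothetical controller and route:

```csharp
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Logging;

[ApiController]
[Route("api/tickets")]
public class TicketsController : ControllerBase
{
    private readonly ILogger<TicketsController> _logger;

    public TicketsController(ILogger<TicketsController> logger) => _logger = logger;

    [HttpGet("{id}")]
    public IActionResult Get(int id)
    {
        // This line appears in CloudWatch Logs under the log group from the Log Configuration.
        _logger.LogInformation("Fetching ticket {TicketId}", id);
        return Ok();
    }
}
```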

Step 4: Migrate to serverless in the cloud

At this point, a container, connected to a relational database, is running in the cloud. The next step is serverless computing. Thanks to AWS Lambda, you have a faster boot time and no longer have to worry about containers or updating security patches for .NET Core. In this scenario, every event triggers your application. Therefore, you need, for example, an API gateway that can trigger AWS Lambda.

You pay only for every millisecond your application runs, and you can scale indefinitely. The downside of this solution, however, is that it's more difficult to test, because there are many more moving parts in your application's architecture. "Your application also becomes more scalable because you only deploy the functions. If errors occur on your container, they are better isolated from the rest of your application."

To migrate your application, you'll need to place an API gateway in front of it so that the Lambda function is available to the frontend application. The easiest way to rebuild the application is to start a new Serverless project. The toolkit has many templates that you can use for this. In his example, Rob used the .NET API. This will add a Lambda Entry Point and a serverless template, which you can copy to your application.

By giving the Lambda Entry Point the same configuration as your program, it can intercept all API calls from your API gateway and forward them to your application. You do this by modifying the handler in the Serverless Template so that it knows where to find your application. Then you assign the correct policies and change the project type so that it can be published as a Lambda function. Once this is done, you can publish to AWS Lambda and you have a serverless application!
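As an illustration, a sketch of what such a Lambda Entry Point looks like, based on the Amazon.Lambda.AspNetCoreServer package; the class name and Startup type are hypothetical stand-ins for the application's existing setup:

```csharp
using Amazon.Lambda.AspNetCoreServer;
using Microsoft.AspNetCore.Hosting;

// APIGatewayProxyFunction translates incoming API Gateway events into regular
// ASP.NET Core requests, so the existing controllers keep working unchanged.
public class LambdaEntryPoint : APIGatewayProxyFunction
{
    protected override void Init(IWebHostBuilder builder)
    {
        // Same configuration as the normal host, only now it is started by Lambda.
        builder.UseStartup<Startup>();   // Startup is the application's existing startup class
    }
}
```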

Step 5: Use cloud-native storage

For a highly scalable application, we also need to avoid bottlenecks. Under heavy load, the relational database can quickly become such a bottleneck, and that problem is difficult to solve. The next step towards a scalable application is therefore DynamoDB, AWS's NoSQL database built for scale. It is extremely fast, supports key-value and document data models, is compatible with PartiQL, offers transaction support, and works great with microservices, mobile backends, and serverless applications. By working this way, your data remains available even under heavy workload.

DynamoDB is organized into tables. Each table has a Primary Key that determines in which partition your data is stored; this is the decisive factor that allows DynamoDB to work so fast. If you can't reach the data you need via the Primary Key, you can use Secondary Indexes, which copy the data into another partition.

Queries are used to retrieve data based on the partition key. Another option is to use the scan operation, but it is much slower and more expensive. Here, all partitions are scanned one by one until the relevant data is found.

DynamoDB also offers streams. These could be compared to triggers in a relational database. In this case, Lambda functions will execute associated logic if data is added, modified, or removed.
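A hedged sketch of such a stream handler, using the Amazon.Lambda.DynamoDBEvents package; the class name and the "Id" key are illustrative (a real deployment also needs a Lambda JSON serializer attribute and handler configuration):

```csharp
using Amazon.Lambda.Core;
using Amazon.Lambda.DynamoDBEvents;

public class TicketStreamHandler
{
    // Invoked by Lambda for every batch of changes on the table's stream.
    public void Handle(DynamoDBEvent dynamoEvent, ILambdaContext context)
    {
        foreach (var record in dynamoEvent.Records)
        {
            // EventName is INSERT, MODIFY or REMOVE - the three cases in which
            // the stream triggers this function. "Id" is a hypothetical key name.
            context.Logger.LogLine($"{record.EventName}: {record.Dynamodb.Keys["Id"].S}");
        }
    }
}
```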

The last step is to move your application from the relational database to a NoSQL DynamoDB database. You do this by creating the database in the console. Within Visual Studio, you can access the data and your tables via the AWS Toolkit. Using the accompanying NuGet packages, you adjust the application to read its data from DynamoDB.
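To round off, a sketch of reading and writing data with the object persistence model from the AWSSDK.DynamoDBv2 package; the table, key, and class names are hypothetical:

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;
using Amazon.DynamoDBv2;
using Amazon.DynamoDBv2.DataModel;

[DynamoDBTable("Tickets")]               // hypothetical table name
public class Ticket
{
    [DynamoDBHashKey]                    // partition key: decides in which partition the item lives
    public string EventId { get; set; } = "";

    [DynamoDBRangeKey]                   // sort key within the partition
    public string TicketId { get; set; } = "";

    public string Holder { get; set; } = "";
}

public class TicketRepository
{
    private readonly DynamoDBContext _context = new(new AmazonDynamoDBClient());

    // A query only touches the partition identified by the key, which keeps it fast;
    // a scan would read every partition and is slower and more expensive.
    public Task<List<Ticket>> GetTicketsForEventAsync(string eventId) =>
        _context.QueryAsync<Ticket>(eventId).GetRemainingAsync();

    public Task SaveAsync(Ticket ticket) => _context.SaveAsync(ticket);
}
```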

A match made in heaven

Although Microsoft .NET and AWS may seem like an unlikely couple at first glance, they are a match made in heaven. The speed at which Amazon innovates is decisive: more than 3 new features are added daily. Rob, a long-time Azure fanboy, is completely convinced after his practical experiences!

Rob Van Pamel

Curious about more software development insights?

Check out the insights from Kenny Laevaert, .NET consultant at Axxes, here.
