Things I Wished I Knew About DevOps Practices and Cloud Technologies When I Started my First Role in Tech


It’s 2021 and I’m just over a month into my third role as a Software Engineer & Tech Coach. It’s been a whirlwind of a journey so far! Here are some things I wished I knew about DevOps practices and cloud technologies when I started my first role in tech.

My role wasn’t just about full-stack Software Engineering in C#, but also involved DevOps practices and Cloud technologies

During my career switch into tech, I thought that DevOps practices and Cloud technologies were utilised solely by DevOps Engineers and Cloud Engineers. I underappreciated how much of my role involved DevOps practices and Cloud.

When I spoke to people in my network, especially those who had recently started their first roles in technology, it seemed like there was a mixed bag. Some people were not involved in DevOps and Cloud at all, though they mentioned some of their colleagues were. Others, like myself, had more of a hybrid role, and some people were doing DevOps and Cloud every single day!

What is DevOps in a nutshell?

AWS states, “DevOps is the combination of…philosophies, practices, and tools that increases an organisation’s ability to deliver applications and services…”. The infrastructure and processes that sit behind software ensure a smoother experience for building code, testing it, shipping it out and monitoring it.

DevOps and Cloud is there to help Developers

Some Software Engineers would say that DevOps and Cloud are not part of their role, so why should they bother? They do have a point. It’s a massive world, and product offerings like AWS Amplify help those who major on the front-end and API domains build mobile/web apps quickly. However, there’s value in learning some of the key concepts of how DevOps and Cloud are helpful.

In my first role in tech, I wanted to learn some fundamentals of DevOps and Cloud that would support me in my role as a C# Full-Stack Software Engineer.

In my team at the time, one of the projects we were tasked with was re-writing a legacy Excel application into a .NET Core 3.1 C# web application (at the time of writing this post, it’s .NET 5). I really liked the way my team worked together on this, all the developers/testers, business analysts, our product owner and scrum master mobbed on this.

Something popped into my head at the time: “Why can’t we just build the web application and then just deploy it to production for the users, easy right? I can just click around on the Azure Portal and just manually make my resources there and then manually deploy.”

Well, when we started mob programming on the cloud infrastructure process, I realised there was more to it than just ‘making something work’.

Automated Continuous Integration & Continuous Deployments Using Azure Repos & Pipelines

One of the things that stuck with me was CI/CD (Continuous Integration / Continuous Deployment). According to the AWS DevOps blog, “An integral part of DevOps is adopting the culture of continuous integration and continuous delivery/deployment (CI/CD), where a commit or change to code passes through various automated stage gates, all the way from building and testing to deploying applications, from development to production environments.”

I got to appreciate this by learning about git, git repositories on Azure repos, managing branches and creating pipelines to build and deploy our C# solution.

During my learning process, I had a sneak peek at how different teams were utilising Azure Pipelines. At first I was hard-coding values, and this sort of worked, but I found myself copying and pasting all the time. I then realised parameterisation was helpful because it let me supply different values for the same pipeline variables. This helped me and the other developers on my team, because it meant we could replicate the same setup across the development, testing, pre-production and production environments of the pipeline. We could configure things to be switched ‘on’ and ‘off’ through code.
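To illustrate the idea, here’s a minimal sketch of what stage-scoped variables can look like in an Azure Pipelines YAML file. The stage, variable and resource names are illustrative, not our actual pipeline:

trigger:
  branches:
    include:
      - main

stages:
  - stage: DeployDev
    variables:
      environmentName: 'dev'
      appServiceName: 'myapp-dev' # illustrative resource name
    jobs:
      - job: Deploy
        steps:
          - script: echo "Deploying $(appServiceName) to $(environmentName)"

  - stage: DeployProd
    variables:
      environmentName: 'prod'
      appServiceName: 'myapp-prod' # illustrative resource name
    jobs:
      - job: Deploy
        steps:
          - script: echo "Deploying $(appServiceName) to $(environmentName)"

The same steps run in every stage; only the variable values change, which is exactly what made replicating environments so much easier.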

Separation of concerns was important here. We decided to go with an infrastructure pipeline and an app pipeline. If there were changes to the web application on a branch, CI/CD would automatically detect this and trigger a build and deployment onto the relevant environments using the relevant pipelines. Test suites would run automatically too. Once the Pull Request (PR) for the branch had been approved and merged, the CI/CD pipeline would build and deploy to the environments. No more arduous manual deployments like the ones we had to deal with for the original Excel application! Great!

Infrastructure-as-Code

During my first role, I realised that clicking around the settings on the Azure Portal to create and configure resources was helpful for me, but not helpful for others. It wasn’t repeatable. We had to think as a team about how we could define and configure the infrastructure using a better approach. This was where Azure Resource Manager (ARM) templates came in handy. They enabled us to define what infrastructure we wanted to create, how we wanted to create it and how to configure it.

The ARM templates were useful as they could be version controlled through git, just like we would version control code. There were also helpful extensions for Visual Studio for structuring and validating these templates.

Most importantly, they enabled a repeatable and testable process for our infrastructure.
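For a taste of what these templates look like, here’s a minimal sketch of an ARM template skeleton declaring a single storage account. The parameter and resource are illustrative, not from our actual project:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "storageAccountName": {
      "type": "string",
      "metadata": { "description": "Illustrative example parameter" }
    }
  },
  "resources": [
    {
      "type": "Microsoft.Storage/storageAccounts",
      "apiVersion": "2019-06-01",
      "name": "[parameters('storageAccountName')]",
      "location": "[resourceGroup().location]",
      "sku": { "name": "Standard_LRS" },
      "kind": "StorageV2"
    }
  ]
}

Because the parameter values are supplied at deployment time, the same template can stamp out identical infrastructure for every environment.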

Logging & Monitoring

So why do we need logging & monitoring? Let me put it this way: when you release a new feature for your product, that’s just the start. Just as a plane has a suite of telemetry to record readings from its instruments, software needs the same to ensure everything is operating as it should. Try to think where logging and monitoring make sense for you.

We used Azure Monitor to add observability into our applications, infrastructure and network.
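To give a flavour from the application side, ASP.NET Core’s built-in ILogger produces structured log entries that flow into Azure Monitor once Application Insights telemetry is configured. Here’s a minimal sketch; the controller name and log message are illustrative, not from our actual codebase:

using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Logging;

public class ExportController : Controller
{
    private readonly ILogger<ExportController> _logger;

    public ExportController(ILogger<ExportController> logger)
    {
        _logger = logger;
    }

    public IActionResult Index()
    {
        // Structured logging: {ReportName} becomes a searchable property in the logs
        _logger.LogInformation("Export requested for {ReportName}", "monthly-summary");
        return View();
    }
}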

Final Thoughts

This is just the surface of what DevOps and Cloud technologies can offer to developers; of course, there are specialists who go deeper into more concepts than those I’ve covered here. If you are working in tech, there is real benefit to learning some of the fundamentals of the infrastructure and processes that sit behind software, to ensure smoother experiences for building code, testing it, shipping it out and monitoring it.

Hey Kim, what’s it like being a Software Engineer & Tech Coach? Q&A Session


How did you become a Software Engineer & Tech Coach?

I didn’t plan this path; it totally happened by accident! 😂

It was only back in February 2019 that I received a fully-funded scholarship to attend a 16-week intensive Software Engineering Bootcamp at Makers. I was a career switcher, having at the time spent over 4 years in sustainability and business consulting roles.

I have since been exposed to full-stack Software Engineering and DevOps practices from a range of roles and industries such as investment management, e-commerce and tech education.

While in my Software Engineer role at Trainline, which is a FTSE 250 rail and coach ticketing platform, a random advert popped up in my LinkedIn feed in December 2020 and it was for a Software Engineer & Tech Coach role at Tech Returners.

When I read the job advert, negative thoughts started coming to my head:

😥 “Am I doing the right thing? Is it too early in my tech career to do this? I’ll be leaving a FTSE 250 company, will I regret it?”

😥 “Am I even qualified for this? There’s only some technologies on the job description I know well, those I know enough to get by and those where I don’t have a clue yet!”

Somehow, because these thoughts came into my head, I wanted to pursue this more than ever! I tried to map things out rationally and thought about what I enjoyed doing: teaching people to code, creating workshops for the community alongside my friends, and my speaking and mentoring work. However, I still wanted to keep being an active Software Engineer, so the role was a great blend for me.

🙌 I applied for the role, did my 2-minute elevator pitch video, had my interviews and landed the job! 🙂

What do you do as a Software Engineer & Tech Coach?

It’s been just over a month since I started my role as a Software Engineer & Tech Coach at Tech Returners – whoop whoop! 🙂 It’s a hybrid role which means I get to do tech coaching and software engineering.

As a Tech Coach, I help to deliver programmes to upskill individuals at mid-senior levels in technology. Since learners on the programme have prior tech experience, I have the opportunity to explore tech concepts in more depth. I’m currently leading sessions, helping with seminars on tech topics, having 1:1s with learners and pair programming with them, recording short videos and providing detailed code review feedback. I onboarded remotely and went straight into all the action; by Day 3, I was already delivering sessions!

💜 I remember my first week watching in awe as the other Tech Coaches, James, Ellie and Heather did their thing! They conducted their roles with care, precision and best practice; I honestly wondered why people hadn’t heard of Tech Returners before.

One of my goals is to design and develop a 5-star curriculum to really innovate tech education. As a Software Engineer, I’m working on internal projects across the full software development lifecycle. One project I’m working on is brand new, which is a great chance for me to be involved with a product from scratch.

How do you go about teaching technical concepts? What techniques do you use?

👩🏻‍💻 I use plenty of visualisations

👩🏻‍💻 I explain tech concepts using analogies and relate them to real-life things

👩🏻‍💻 I record short videos/screen recordings to walk through tricky technical concepts, provide thorough code review feedback and help with debugging strategies

👩🏻‍💻 I conduct 1:1 and group video calls to host sessions, webinars and provide technical and wellbeing support

👩🏻‍💻 I try to start from the core principles and break down technical jargon as much as I can to make it sound less daunting. Some technologies like git version control use scary words which create a barrier to learning; even though the technology can be very powerful.

How do you balance learning & coaching?

I create and deliver programmes to help underrepresented people refresh & upskill in the Software Engineering domain after a career break. I also deliver programmes to upskill engineers at existing companies in all things DevOps!

That’s a lot of technologies! How can you keep up?

I would say I’m aiming for a T-shaped skillset. This means I have deep expertise in a few technologies, with supporting but less-developed skills in others. For example, I am more backend/cloud focused, with my primary language being C#; but if I get asked a question about React components and how to test them, even though that’s not my area of expertise, I’m able to conduct some research, put the pieces together or reach out to other tech coaches to put a technical recommendation together.

Over time, I’ve developed a skill for spotting patterns in code quickly, whatever the tech stack or languages used. Learners think I do some magic! The reality is I don’t; it’s the patterns I see again and again that help me spot things quickly.

Developing myself while teaching others

I listened to the egghead.io developer chats podcast episode featuring Ali Spittel on Developing Yourself While Teaching Others and I found so much inspiration from Ali Spittel’s journey.

Going through the cycle of learning & coaching, I found myself solidifying my understanding of tech concepts and technologies at a faster rate than if I were learning without teaching others. Before being a Tech Coach, I would become impatient and skip over a tech concept quickly just to ‘make things work’; I now focus more deeply and precisely in my learning so I can provide the best technical coaching.

Since I’m not writing production code as often, I set some time aside (25 minutes to 1 hour) in the morning before my work commitments to develop myself by building my own projects, practising my coding skills or researching technical concepts. I don’t code on the weekends or in the evenings after 6pm, because I find it’s important to have some time off. I’m trying to learn how to be a more effective and efficient learner every day. I also revisit technical concepts again and again, rather than moving on too quickly.

What do you enjoy most about being a Software Engineer & Tech Coach?

I love seeing others learn and grow in their technical skills and confidence. It’s not just about the technical journey, but the human one too.

I also really like the challenge of finding new ways to explain technical concepts and technologies in digestible ways. I like the feeling I get when I’m asked a question by the learners and have the opportunity to go and explore it for myself.

I like pair programming and mob programming with the other Software Engineers & Tech Coaches so we can all learn together and continue to innovate tech education.

What would you say are the most challenging aspects of your role?

From a technical standpoint, there are times where I doubt myself and my abilities and start to think: “What if I get caught out?” “What if I get asked a question and I don’t have a clue how to answer it yet?” “Surely, I’m the tech coach and I should know everything, right?” I always have to remind myself about my T-shaped skillset and that I don’t have to be an expert in everything.

From an emotional standpoint, I have a duty of care for my learners: I provide wellbeing support, listen to them and help them find ways to move forward and reflect. Therefore, I have to be more disciplined than ever with how I use my time, so that I can focus on providing the best support possible, whilst also making sure I take care of myself and prioritise my own learning before supporting others.

How are you continuing to develop yourself? What’s in store for the future?

For my T-shaped skillset, I decided that I would focus on C# as my primary language. In terms of tech stack, I’m focusing on the backend and DevOps side of things. I’m not a specialist in HTML, CSS and React, though I’m able to work with them as best I can.

I love creating content, designing, developing and innovating tech education, so would love to continue to create workshops for the community and do some public engagements around technology, such as my most recent collaboration with The National Museum of Computing and the Codebar Festival.

Thanks for reading! 🙂

C# Repository Design Pattern for Database Operations in a .NET Core 3.1 MVC Web App


Introduction

When building applications, it is important to consider how and where you’re conducting database operations.

Entity Framework Database Context (DbContext) and the Controller

Building a basic template for a .NET Core 3.1 application using a scaffolding approach like the one from this Microsoft tutorial is a great starting point. Firstly, let’s have a look at a small code snippet generated from the scaffolding.

In this example, the PusheenController class has actions for CRUD (Create, Read, Update and Delete) operations against the database. Here, we are directly interacting with the Entity Framework DbContext class called PusheenCustomExportCsvContext and retrieving data about Pusheens from the database. The PusheenCustomExportCsvContext is injected as a dependency into the PusheenController. In this web app, dependencies are added to the service container in the ConfigureServices method in Startup.cs.

However, it is easy to end up with big controllers; big in the sense that there’s a lot of database operations logic built into the controller. Since the DbContext is a dependency of the controller, a further issue is that if you were to test this, you would have to mock the DbSet and DbContext. It is definitely achievable to mock whatever we like, but we would have to mock the Provider, Expression and ElementType properties and the GetEnumerator() method.
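To illustrate the ceremony involved, here’s a sketch of what mocking a DbSet<Pusheen> with Moq typically looks like, following the common pattern for mocking EF sets (this snippet is illustrative, not from the project, and assumes the Pusheens property on the context is declared virtual):

var data = new List<Pusheen>
{
    new Pusheen { Id = 1, Name = "Pusheen" }
}.AsQueryable();

var mockSet = new Mock<DbSet<Pusheen>>();

// All four members below have to be wired up just to make LINQ queries work against the mock
mockSet.As<IQueryable<Pusheen>>().Setup(m => m.Provider).Returns(data.Provider);
mockSet.As<IQueryable<Pusheen>>().Setup(m => m.Expression).Returns(data.Expression);
mockSet.As<IQueryable<Pusheen>>().Setup(m => m.ElementType).Returns(data.ElementType);
mockSet.As<IQueryable<Pusheen>>().Setup(m => m.GetEnumerator()).Returns(() => data.GetEnumerator());

// In EF Core you may also need to pass DbContextOptions to the mocked context's constructor
var mockContext = new Mock<PusheenCustomExportCsvContext>();
mockContext.Setup(c => c.Pusheens).Returns(mockSet.Object);

And that’s just for synchronous queries; async methods like SingleOrDefaultAsync need an async query provider on top. This is exactly the kind of setup cost that the repository pattern below helps us avoid.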

In larger applications, we would like to separate the concerns out into layers that are responsible for the business logic, presentation, database etc.

Example 1: DbContext and PusheenController

//Code omitted for brevity 🙂
namespace PusheenCustomExportCsv.Web.Controllers
{
    public class PusheenController : Controller
    {
        private readonly PusheenCustomExportCsvContext _context;

        public PusheenController(PusheenCustomExportCsvContext context)
        {
            _context = context;
        }

        //Code omitted for brevity 🙂

        // GET: Pusheen/Details/5
        public async Task<IActionResult> Details(int? id)
        {
            if (id == null)
            {
                return NotFound();
            }

            var pusheen = await _context.Pusheens
                .SingleOrDefaultAsync(m => m.Id == id);
            if (pusheen == null)
            {
                return NotFound();
            }

            return View(pusheen);
        }

        // GET: Pusheen/Create
        public IActionResult Create()
        {
            return View();
        }

        // POST: Pusheen/Create
        [HttpPost]
        [ValidateAntiForgeryToken]
        public async Task<IActionResult> Create([Bind("Id,Name,FavouriteFood,SuperPower")] Pusheen pusheen)
        {
            if (ModelState.IsValid)
            {
                _context.Add(pusheen);
                await _context.SaveChangesAsync();
                return RedirectToAction("Index");
            }
            return View(pusheen);
        }

What is the goal of the Repository Design Pattern and why is it useful?

Let’s assume we would like a presentation layer made up of controllers and views, and a service layer for the business logic and database operations. We can create a repository (in this case lumped into the service for simplicity) where our database operations and logic can sit.

The repository is in charge of interacting with the Entity Framework DbContext class, so the controller doesn’t have to.

Repository: Defining and Implementing the Interface

Here, we define an IPusheenService interface and implement it in the PusheenService class.

Example 2: IPusheenService

//Code omitted for brevity 🙂
namespace PusheenCustomExportCsv.Web.Services
{
    public interface IPusheenService
    {
        Task<List<Pusheen>> GetAllAsync();
        Task<Pusheen> Create(Pusheen pusheen);
        Task<Pusheen> Update(Pusheen pusheen);
        Task<Pusheen> Delete(Pusheen pusheen);
        Task<Pusheen> FindPusheenAsync(int? id);
        Task<Pusheen> FindPusheenById(int? id);
        bool PusheenExists(int id);

    }
}

Below is an example of how PusheenService implements FindPusheenAsync and FindPusheenById. These database operations were originally coded directly into the controller as we saw in Example 1.

Example 3: PusheenService

//Code omitted for brevity 🙂

        public async Task<Pusheen> FindPusheenAsync(int? id)
        {
            var pusheen = await _context.Pusheens.FindAsync(id);
            return pusheen;
        }

//Code omitted for brevity 🙂

        public async Task<Pusheen> FindPusheenById(int? id)
        {
            var pusheen = await _context.Pusheens
                .FirstOrDefaultAsync(m => m.Id == id);
            return pusheen;
        }

//Code omitted for brevity 🙂

Let’s see what our controller looks like now. The key difference is that the PusheenController is a lot slimmer and we don’t need to interact with the DbContext directly anymore; that’s the job of the repository now! 🙂

Example 4: PusheenController

//Code omitted for brevity 🙂
namespace PusheenCustomExportCsv.Web.Controllers
{
    public class PusheenController : Controller
    {
        private readonly IPusheenService _pusheenService;

        public PusheenController(IPusheenService pusheenService)
        {
            _pusheenService = pusheenService;
        }

//Code omitted for brevity 🙂

        // GET: Pusheen/Details/5
        public async Task<IActionResult> Details(int? id)
        {
            if (id == null)
            {
                return NotFound();
            }

            var pusheen = await _pusheenService.FindPusheenById(id);

            if (pusheen == null)
            {
                return NotFound();
            }

            return View(pusheen);
        }

        // GET: Pusheen/Create
        public IActionResult Create()
        {
            return View();
        }

        // POST: Pusheen/Create
        // To protect from overposting attacks, enable the specific properties you want to bind to, for 
        // more details, see http://go.microsoft.com/fwlink/?LinkId=317598.
        [HttpPost]
        [ValidateAntiForgeryToken]
        public async Task<IActionResult> Create([Bind("Id,Name,FavouriteFood,SuperPower")] Pusheen pusheen)
        {
            if (ModelState.IsValid)
            {
                await _pusheenService.Create(pusheen);
                return RedirectToAction(nameof(Index));
            }
            return View(pusheen);
        }
//Code omitted for brevity 🙂
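One last piece for completeness: for the controller to receive an IPusheenService, the implementation has to be registered with the service container. Here’s a minimal sketch of what that could look like in Startup.cs’s ConfigureServices, assuming the class names from this post (the connection string name is illustrative):

public void ConfigureServices(IServiceCollection services)
{
    services.AddControllersWithViews();

    // EF Core DbContext registration (connection string name is illustrative)
    services.AddDbContext<PusheenCustomExportCsvContext>(options =>
        options.UseSqlServer(Configuration.GetConnectionString("PusheenCustomExportCsvContext")));

    // Bind the interface to the implementation so it can be injected into PusheenController
    services.AddScoped<IPusheenService, PusheenService>();
}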

Final Thoughts

I hope you found this post useful and that you enjoyed being part of my coding journey! Thank you for reading my blog! 🙂

You can find the link to my GitHub repo with the simple web app example here.

C# Unit Testing a Custom FileResult That Exports Data into a CSV file Using Streaming in a .NET Core 3.1 MVC App


Introduction

In the past two months at work, I was tasked with learning C#, as well as creating a web app using the .NET Core 3.1 MVC framework. I wanted to document the most interesting concepts in a series of blog posts.

In my last blog, I demonstrated how to create a custom FileResult to export data into a CSV file using streaming in a .NET Core 3.1 MVC web app. In this follow-on blog post, I will show you how to unit test the custom FileResult and the controller which produces it.

Again, the actual code was more complex; this blog was my attempt to abstract the core concepts into a simple web app using Pusheen the Cat as a fun example!

Unit Testing Custom FileResult With Streaming in .NET Core 3.1

In my last blog, I had a custom FileResult called PusheenCsvResult. To set the scene for unit testing, I used the NUnit testing framework, along with the FluentAssertions and FluentAssertions.AspNetCore.Mvc libraries, which provided a clear way to communicate what I was asserting in my tests (i.e. what the expected result was). I applied the Arrange, Act, Assert structure for this and am still learning the best way to do it!

P.S. Would highly recommend the book Agile Technical Practices Distilled: A learning journey in technical practices and principles of software design.

What was the goal?

Let’s start off with the goal! When we’re working with unit testing, it’s helpful to define what it is we’re checking for. In this unit testing situation, I wanted a way to check that PusheenCsvResult’s ExecuteResultAsync method was streaming (writing) the response correctly to the HttpContext’s response body.

How did I go about doing it?

Knowing this, I followed some tips in the Agile Technical Practices Distilled book to start from the assertion and work backwards (Assert, Act, Arrange). I didn’t just magically know what I needed; it took some time to get there.

Setting it up

I created a [TestFixture] for testing PusheenCsvResult and within the [SetUp], I defined a _httpContext and a _fileDownloadName, and made a fake _fakeActionContext object.

I did this because PusheenCsvResult’s ExecuteResultAsync method took a parameter of type ActionContext.

Let’s recap on my last blog for a second: the job of ExecuteResultAsync is to use a StreamWriter to write to the response body of the ActionContext’s HttpContext. The stream sits between the application and, in this case, the response body. The data is written to the stream (the response body), which then results in the CSV file being produced.

Within the unit test scope, I wanted to create a _fakeActionContext object as an instance of ActionContext and set its HttpContext property to the _httpContext I defined earlier in my test [SetUp]. This enabled me to check what was written to the response body of that _httpContext.

//Code omitted for brevity 🙂

    [TestFixture]
    public class PusheenCsvResultShould
    {
        private PusheenCsvResult _pusheenCsvResult;
        private string _fileDownloadName;
        private string _expectedResponseText;
        private DefaultHttpContext _httpContext;
        private ActionContext _fakeActionContext;

        [SetUp]
        public void Setup()
        {
            _httpContext = new DefaultHttpContext();

            _fileDownloadName = "pusheen.csv";

            _fakeActionContext = new ActionContext()
            {
                HttpContext = _httpContext
            };
        }
        
        [Test]
        public async Task GivenActionContext_ExecuteResultAsync_ShouldWriteLineToHttpResponseBody()
        {
            
            //Arrange
            var data = new List<Pusheen>()
            {
                new Pusheen() { Id = 1, Name = "Pusheen", FavouriteFood = "Ice cream", SuperPower = "Baking delicious cookies" },
                new Pusheen() { Id = 2, Name = "Pusheenosaurus", FavouriteFood = "Leaves", SuperPower = "Roarrrrr!" },
                new Pusheen() { Id = 3, Name = "Pusheenicorn", FavouriteFood = "Butterfly muffins", SuperPower = "Making rainbow poop" }
                
            }.AsQueryable();

            _pusheenCsvResult = new PusheenCsvResult(data, _fileDownloadName);

            _expectedResponseText = System.IO.File.ReadAllText(TestContext.CurrentContext.TestDirectory + @"/TestData/expectedCsv.txt");

            var memoryStream = new MemoryStream();
            _httpContext.Response.Body = memoryStream;

            //Act
            await _pusheenCsvResult.ExecuteResultAsync(_fakeActionContext);
            var streamText = System.Text.Encoding.Default.GetString(memoryStream.ToArray());

            //Assert
            streamText.Should().Be(_expectedResponseText);
        }

    }

Let’s hop over to the test

For the [Test] itself, I checked that given an ActionContext, the method ExecuteResultAsync should WriteLine to the HttpContext response body.

I needed a PusheenCsvResult for my test, and its constructor took 2 parameters:

  1. data (as type IQueryable<Pusheen>)
  2. fileDownloadName (as type String)

I had already defined _fileDownloadName earlier in the [SetUp], so the next step was to make some data for the test scenario. In this case, I created a new List<Pusheen>, converted it with .AsQueryable() and passed it into PusheenCsvResult’s constructor.

Based on this information, I made a file containing the text I expected to see, loaded into _expectedResponseText. In my assertion, I checked that the text I got from the stream matched _expectedResponseText for the test to pass.

Now, this was the tricky bit: how to deal with closed streams?

When I was testing this, I didn’t know what was wrong, as the test kept saying that it couldn’t access a closed stream. Since I defined the StreamWriter within a using block, the stream is closed once it has done its job. This was not a bad thing, and is something I recommend doing in your implementation; but it meant I needed another way to access what was written to the stream for the purposes of the unit test (in this case, the stream was the HttpContext’s response body itself).

I added some comments on the code snippet below to describe what was going on.

 // I create a new Memory Stream and set that stream as the Response Body of the _httpContext I'm using in my unit test scope
var memoryStream = new MemoryStream();
_httpContext.Response.Body = memoryStream;

//Act
// I await and pass the _fakeActionContext to my ExecuteResultAsync method. Reminder that I pointed the HttpContext of ActionContext to the _httpContext I made for testing
await _pusheenCsvResult.ExecuteResultAsync(_fakeActionContext);

//I need to make sure that I capture the contents of the memoryStream and store it against the variable streamText which I can access later in my assertion
var streamText = System.Text.Encoding.Default.GetString(memoryStream.ToArray());

Unit testing the Controller

The controller was a bit more straightforward. I used Moq to mock the PusheenService and set up its GetAllPusheens() method to return some data.

_mockPusheenService.Setup(p => p.GetAllPusheens()).Returns(data);

Here, I tested that ExportCsv on the PusheenController returned a result of type PusheenCsvResult and that the fileDownloadName and contentType were correct.

//The rest of the code has been omitted for brevity! 🙂

namespace PusheenCustomExportCsv.Tests.Controllers
{
    [TestFixture]
    public class PusheenControllerShould
    {
        private PusheenController _controller;
        private Mock<IPusheenService> _mockPusheenService;
        private Mock<IConfiguration> _mockConfig;
        private DbContextOptions<PusheenCustomExportCsvContext> _testDbOptions;
        private PusheenCustomExportCsvContext _testDbContext;
        
        [SetUp]
        public void Setup()
        {
            _mockPusheenService = new Mock<IPusheenService>();
            _controller = new PusheenController(_mockPusheenService.Object);
        }
        
        [Test]
        public void ExportCsv_Returns_CsvResult()
        {
            //Arrange
            var data = new List<Pusheen>()
            {
                new Pusheen() { Id = 1, Name = "Pusheen", FavouriteFood = "Ice cream", SuperPower = "Baking delicious cookies" },
                new Pusheen() { Id = 2, Name = "Pusheenosaurus", FavouriteFood = "Leaves", SuperPower = "Roarrrrr!" },
                new Pusheen() { Id = 3, Name = "Pusheenicorn", FavouriteFood = "Butterfly muffins", SuperPower = "Making rainbow poop" }
                
            }.AsQueryable();

            _mockPusheenService.Setup(p => p.GetAllPusheens()).Returns(data);

            //Act
            var result = _controller.ExportCsv();

            //Assert
            result.Should().BeOfType(typeof(PusheenCsvResult));
            result.FileDownloadName.Should().Be("pusheen.csv");
            result.ContentType.Should().Be("text/csv");

        }

        
    }
}

Final Thoughts

I hope you found this post useful and that you enjoyed being part of my coding journey! Thank you for reading my blog! 🙂

You can find the link to my GitHub repo with the simple web app example containing the custom FileResult and test project here.

C# Creating a Custom FileResult to Export Data into a CSV file Using Streaming in a .NET Core 3.1 MVC App


Introduction: What is the goal?

In the past two months at work, I was tasked with learning C#, as well as creating a web app using the .NET Core 3.1 MVC framework. I wanted to document the most interesting concepts in a series of blog posts.

In this blog, I will show you how to create a custom FileResult to export data into a CSV file using streaming in a .NET Core 3.1 MVC web app. This came about when I was asked to give the user the ability to export their dataset to a CSV file through the web app, but there was a lot of data to deal with. The goal was to create something that exports to a CSV file while taking performance into account.

The actual code was more complex; this blog is my attempt to abstract the core concepts into a simple web app using Pusheen the Cat as a fun example!

P.S. If you don’t know who Pusheen is, I am utterly obsessed with it. You’ll probably recognise Pusheen from numerous gifs and stickers on social media.

C# Streaming: What is it and why is it useful?

Imagine you had a lot of data to read/write as part of a web app solution; you would probably want to break it down into bitesize pieces, as you won’t want to read/write a large file in one go! This can be achieved through streaming.

A stream sits between the application and the file; the benefit of using streaming is that read/write operations are a lot smoother. When you’re writing data somewhere, it is written to the stream first, and from the stream it then goes to your chosen destination, usually a file. When you’re reading data, the data is read into the stream and your web app can then read from the stream.

If you’d like to read a bit more, this article by Guru99 is particularly helpful.

“C# provides standard IO (Input/Output) classes to read/write from different sources like a file, memory, network, isolated storage, etc. System.IO.Stream is an abstract class that provides standard methods to transfer bytes (read, write, etc.) to the source. It is like a wrapper class to transfer bytes. Classes that need to read/write bytes from a particular source must implement the Stream class.” – Extracted from TutorialsTeacher
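To make this concrete before we build the custom FileResult, here’s a tiny self-contained sketch (not from the project) of writing to and then reading from a stream in C#, with a MemoryStream standing in for the destination:

using System;
using System.IO;
using System.Text;

class StreamDemo
{
    static void Main()
    {
        using var memoryStream = new MemoryStream();

        // Write data to the stream piece by piece rather than all in one go
        using (var writer = new StreamWriter(memoryStream, Encoding.UTF8, bufferSize: 1024, leaveOpen: true))
        {
            writer.WriteLine("Pusheen, Ice cream");
            writer.WriteLine("Pusheenicorn, Butterfly muffins");
            writer.Flush(); // push any buffered data into the underlying stream
        }

        // Read the data back out of the same stream
        memoryStream.Position = 0;
        using var reader = new StreamReader(memoryStream);
        Console.WriteLine(reader.ReadToEnd());
    }
}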

Creating a Custom FileResult With Streaming in .NET Core 3.1

Why Create a Custom FileResult in the First Place?

The .NET Core 3.1 MVC framework provides the ActionResult class (here’s the link to the docs for it), which implements the IActionResult interface. ActionResult is essentially the return type of a controller method; it is the base class for lots of result classes that return models to views, file streams and more! There are many derived classes to choose from, but there wasn’t one to produce a CSV result. I wanted a FileResult which used streaming and exported to CSV.

The cool thing was that one of the derived classes is FileResult. This class represents an ActionResult that, when executed, will write a file as the response. We can extend this class to create a custom FileResult that produces a CSV.

Example of Custom FileResult With Streaming in .NET Core 3.1

Here is an example of a PusheenCsvResult which extends FileResult. As it is specific to exporting data about Pusheen the Cat, we give it some pusheenData as IEnumerable<Pusheen>. In its constructor, we pass in that data along with the fileDownloadName, and set the content type to "text/csv".

As we’re extending FileResult, we override ExecuteResultAsync with our implementation. In this example, we’re using a StreamWriter to write to the response body of the ActionContext’s HttpContext. We write the header row for our CSV file and then iterate through our _pusheenData, writing each row. As a reminder, the stream sits between the application and, in this case, the response body; the data is written to the stream (the response body), which results in the CSV file being produced.

We define the StreamWriter within a using block. We use the StreamWriter.FlushAsync method to clear all buffers for the current writer, which causes any buffered data to be written to the underlying stream.

using System;
using System.IO;
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;


namespace PusheenCustomExportCsv.Web.Models
{
    public class PusheenCsvResult : FileResult
    {
        private readonly IEnumerable<Pusheen> _pusheenData;

        public PusheenCsvResult(IEnumerable<Pusheen> pusheenData, string fileDownloadName) : base("text/csv")
        {
            _pusheenData = pusheenData;
            FileDownloadName = fileDownloadName;
        }

        public override async Task ExecuteResultAsync(ActionContext context)
        {
            var response = context.HttpContext.Response;
            response.Headers.Add("Content-Disposition", new[] { "attachment; filename=" + FileDownloadName });

            using (var streamWriter = new StreamWriter(response.Body)) {
              await streamWriter.WriteLineAsync("Pusheen, Food, SuperPower");
              foreach (var p in _pusheenData)
              {
                await streamWriter.WriteLineAsync(
                  $"{p.Name}, {p.FavouriteFood}, {p.SuperPower}"
                );
                await streamWriter.FlushAsync();
              }
              await streamWriter.FlushAsync();
            }
        }

    }
}

Using the Custom FileResult in the Controller

Now that we have the PusheenCsvResult class, we can go ahead and use it in the controller.

//The rest of the code has been omitted for brevity! 🙂

public FileResult ExportCsv()
{
    return File(_pusheenService.GetAllPusheens(), "pusheen.csv");
}

public virtual PusheenCsvResult File(IEnumerable<Pusheen> pusheenData, string fileDownloadName)
{
    return new PusheenCsvResult(pusheenData, fileDownloadName);
}

App Demo

Here are some screenshots of what it looks like from the front-end!


Final Thoughts

I hope you found this useful! I thought an improvement for this PusheenCsvResult class would be to make it generic, so that it can work with all kinds of datasets, not just the Pusheen dataset! That’s for another day! 🙂
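To give a flavour of what that could look like, here’s a rough, untested sketch of a generic version that uses reflection over T’s public properties to build the header and rows (a hypothetical illustration, not code from my repo):

using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;

public class CsvResult<T> : FileResult
{
    private readonly IEnumerable<T> _data;

    public CsvResult(IEnumerable<T> data, string fileDownloadName) : base("text/csv")
    {
        _data = data;
        FileDownloadName = fileDownloadName;
    }

    public override async Task ExecuteResultAsync(ActionContext context)
    {
        var response = context.HttpContext.Response;
        response.Headers.Add("Content-Disposition", new[] { "attachment; filename=" + FileDownloadName });

        // Discover the columns once from T's public properties
        var properties = typeof(T).GetProperties();

        using (var streamWriter = new StreamWriter(response.Body))
        {
            await streamWriter.WriteLineAsync(string.Join(", ", properties.Select(p => p.Name)));

            foreach (var item in _data)
            {
                await streamWriter.WriteLineAsync(
                    string.Join(", ", properties.Select(p => p.GetValue(item))));
                await streamWriter.FlushAsync();
            }
        }
    }
}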

Watch out for my next blog on unit testing the custom FileResult that exports data into a CSV file using streaming in a .NET Core 3.1 MVC app. I have a series of .NET Core MVC blogs coming up, so it should be exciting times!

Thank you for reading my blog! 🙂

You can find the link to my GitHub repo with the simple web app example containing the custom FileResult here.

Kim’s Book & Podcast Recommendation! Hannah Fry’s Hello World and DeepMind Podcast :)


📚Book Recommendation

Hello World: How to be Human in the Age of the Machine by Dr Hannah Fry is a fantastic read on algorithms in our everyday lives and why we should not forget the role of the human in technology.

Hannah Fry brought concepts like privacy, trust, decision-making, machine learning and image recognition to life. By relating these concepts to a range of topics such as Art, Crime, Medicine, Law, Data and Power, she made the book a very accessible and engaging read.

You can find the link to purchase her book here.


🎧Podcast Recommendation

DeepMind: The Podcast, hosted by Dr Hannah Fry, is a fantastic series on AI research.

You can find the link to listen to the podcast series here.

Intro to Docker Containers & Microsoft Azure Part 1 – Beginner’s Guide to Containerising a Hello World Python Web App 


Greetings!

Hi everyone, it’s great to be back again! This blog is part one of a two-part series on Docker and Microsoft Azure. In Part 1, we will containerise a Hello World Python web app using Docker. In Part 2, we will learn how to build and push the container image using DevOps pipelines on Microsoft Azure.

Prerequisites:

Before we get stuck in, here are some prerequisites:

Containers

Here is a useful link if you would like a quick 5-minute intro to containers.

Docker

Docker is a set of platform-as-a-service products that use OS-level virtualization to deliver software in packages called containers. (Ref: https://en.wikipedia.org/wiki/Docker_(software))

It mitigates the classic “But it works on my machine!” problem and streamlines the development lifecycle by allowing developers to work in standardized environments using local containers which provide the applications and services. (Ref: https://docs.docker.com/engine/docker-overview/)

Containers are great for continuous integration and continuous delivery (CI/CD) workflows. Did I mention we can even integrate it with Azure? ☁️

You can get started on the Docker documentation here.

Docker Desktop

This blog assumes you have Docker Desktop installed. To install Docker Desktop, you can use this link. I’m using my lovely MacBook Pro 💻 for this, hehe! 😉 But you can choose whether you want to download Docker for Mac or Windows.

Python

For this tutorial, you will need Python 3.7+, which you can download by going to the following link.

Pip

You will also need the latest version of pip which is a recommended tool for installing Python packages.

To check you have the right Python and pip versions, you can use the commands:

python --version
pip --version
Checking Python and Pip Versions

Now, onto the fun stuff! 🏄‍♀️

Step 1: Project Setup – Create a Project Directory

First, let’s create a new project directory called hi-there-docker. Please feel free to call your project directory any name you want; just remember to use your name wherever it’s referenced throughout this blog.

Open up the project directory in your favourite code editor. I find Visual Studio Code works quite well if you’re starting out.

Step 2: Project Setup – File Setup

Next, let’s create a requirements.txt file in the directory; it is good practice in Python projects to have this file to manage packages.

Top Tip! 💡

If you are using Windows, you can create a function called ‘touch’ (for the UNIX fans out there) which enables you to use the touch command to create a new file in Windows PowerShell. Enter the following command in Windows PowerShell to enable this:

function touch {set-content -Path ($args[0]) -Value ($null)}

In the requirements.txt file, enter the package Flask==1.0.2, as we will need Flask to create the Hello World application.

requirements.txt

Finally, enter the following in the terminal to install the packages listed in requirements.txt.

pip install -r requirements.txt

Step 3: Python – Create a Hello World Flask App

I won’t be going into detail on how Flask works, but you should check it out if you’re interested.

We’re now onto Step 3: let’s create a new file called main.py.

In the main.py file, enter the following:

from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
   return "Hello World!"

if __name__ == "__main__":
   app.run(host="0.0.0.0", port=5000, debug=True)

This is a very simple app with one route that returns ‘Hello World!’.

main.py

Step 4: Create a Dockerfile

Once that’s complete, we can move onto Docker! Let’s create a Dockerfile. The Dockerfile defines settings such as the base image for the container (including the Python version), as well as the dependencies to install at build time to produce the container image.

Create a Dockerfile in the root directory of your project folder and enter the following:

FROM python:3.7
COPY . /app
WORKDIR /app
RUN pip install -r requirements.txt
EXPOSE 5000
CMD python ./main.py

Your Dockerfile should look something like this:

Dockerfile

Step 5: Let’s Build the Image

Before we can build the image using Docker, let’s confirm the Docker CLI is working by typing the following into your terminal:

docker --version
Checking Docker Version

Also, check that Docker Desktop is up and running; there should be a cute little icon of the Docker whale in your menu bar or taskbar.

Docker Whale

Now we are ready to build the image! 💿

You can tag the image with a name of your choosing, but let’s use the name hi-there-image.

Ensure you are in the root directory of the project and enter the following into the terminal:

docker build --tag hi-there-image .
Building the Image

🎉 You’ve built your first image using Docker and tagged it with a name – woohoo! 🎉


Step 6: Running the Image as a Container

Once the Docker image has been built, you are now ready to run the image as a container.

Our container will be called:

hi-there-container

And our image name is:

hi-there-image

To start the application as a container, enter the following into the terminal:

docker run --name hi-there-container -p 5000:5000 hi-there-image
Running the Image as a Container

🎈That’s it! You are now running the image as a container! 🎈

Step 7: Go to the App

🥳 Now you can go to your app at http://localhost:5000 🥳

Go to the App

Final Step: Viewing and Managing Containers using the Docker CLI

#To display a list of running containers:
docker ps

#To stop one or more running containers:
docker container stop [container name(s)]

#For example, if we wanted to stop running the hi-there-container, we can run the following command in the terminal:
docker container stop hi-there-container

#To remove one or more containers:
docker container rm [container name(s)]

#For example, if we wanted to remove the hi-there-container, we can run the following command in the terminal:
docker container rm hi-there-container

#To confirm that the container is no longer running, check that it is no longer in the list:
docker ps

😊 Congratulations, you have just built and containerised a Hello World App using Docker! 😊

🤔 What’s next?

If you want to explore more on how to build and push the images using Docker as tasks within the Microsoft Azure Pipelines, watch out for Part 2 of this blog series. ☁️

My First 3 Weeks as a Software Engineer Summarised in 10 Quotes and Emojis!👩‍💻


Hi again! 🙂

I just finished my first 3 weeks as a Software Engineer. Woohoo! Here is my experience so far in 10 quotes and emojis! 👩‍💻

1 👩‍💻🤩 “I’m so excited, I can’t sleep!” 😴 “Hey, get to sleep already! You need to be fresh for tomorrow!” 👩‍💻🤩 “Awwww…but I’m too excited!”

2 💃“Oooo…how does a Microsoft Surface Pro even work? What’s this flappy thing at the back?”

3 “Hello Microsoft Windows, haven’t used you in a while!…Windows Update!!! 😂”

4 “🤓 C# is so cool!!!! I finally get to use a statically typed programming language!”

*Goes to build solution* = ERROR: ; expected

AHHHHHHHHHHH!!!!!!! 👿

5 “Let’s get this code out! Deployment, what can possibly go wrong with the build and deployment pipeline? 🚀”

Deployment fails…arghhhhhh!!!!! 😡

Goes into ultra mode to fix issue 💪

Deployment passed – yipppeeee!!! 😎

6 “It works, it actually works!!! Oh yeah!! Ice cream time!” 🍦🙌🏼😃

7 “I can figure this out…I can figure this out…should I ask for help? 🤔Nah, I can figure this out…I can figure this out…oh man…I NEEEDDD HELP!!! SOMEONE HELP!!”

8 “How have I not used Visual Studio IDE before? It’s A-M-A-Z-I-N-G!!” 💻

9 “Am I meant to be here? I don’t have a clue what’s going on. Damn…I’m actually writing code!” 🎊

10 “Wow, I’ve learnt so much! How did all of this happen?” 😊

Final Thoughts

As you can see, a lot of ups and downs, but I’m absolutely loving it! 🙂

Byeeeeeeee,

Kim

I got my first ever job as a Software Engineer!!!



Getting ready to leave this morning for my first day!

I GOT A JOB AS A SOFTWARE ENGINEER! 🥳🎊🥂🎈

Today, I am so excited to announce my first ever role as an Associate Software Engineer at M&G Plc! The career change has been an adventurous ride; dreams really do come true!

I thought I would never be able to do this without a Computer Science degree, but I was proven wrong by the amazing tech community who supported me!

If you’re interested in hearing more about my career change, check out a Q&A on my Coding Journey here.

Special Thanks!

Thank you to all those who have been part of my tech journey so far!

I also want to do a big shout out to EVORA Global, Makers, Codebar, Code First: Girls, Women Who Code, Girls in Tech, Inc., Tech for Good and Rails Girls for supporting my journey into tech!

EVORA Global

My friends at the sustainability consultancy are a bunch of fantastic people, who inspired me to come into work every day to do my best. After a year as a Junior Sustainability Consultant on the consulting team, I was given the opportunity to put forward some ideas to grow the company’s proprietary technology solution, which was focused on the sustainable real estate industry. The solution had a data-driven approach to support environmental data management and reporting for commercial real estate sector clients.

I’m so grateful that the company put its trust in me and offered me the chance to be part of their new technology team. It was pretty much a blank slate at the time and something felt ‘right’ about it. Though I hesitated to leave my sustainability consulting days behind, I was elated at the prospect of being part of something special.

For over 2 years, I got to grips with the concept of Agile and engaged with the team on the development of the sustainability software through the implementation of agile product/project management strategies, business requirements gathering and specification. I learned about Scrum iterative software development and how to use Atlassian’s JIRA tool.

Thank you EVORA for being part of my tech journey!

If you haven’t heard of EVORA, definitely check out their website here!

Code First: Girls

In September 2017, I joined the Code First: Girls Web Development Beginner’s Course and never looked back. I remember dashing off after work one evening a week to the Twitter UK HQ, utterly exhausted but feeling so excited and energised to code. The weekly sessions covered so many things, such as HTML, CSS, JavaScript, jQuery, Git, GitHub collaboration, development concepts, Twitter Bootstrap and responsive web development; I even had the chance to work on a group project.

As soon as that ended, I enrolled onto the Code First: Girls Advanced Ruby Course, where I was introduced to Ruby programming, the Sinatra framework, GET/POST requests, development concepts, automated emails using Mailgun, external APIs and deployments. I got to understand application deployment on Heroku’s cloud hosting services and explored the Twitter API.

Thank you Code First: Girls! You really helped me to find my passion. As a Code First: Girls alumna, I feel like I could change the world!

If you haven’t heard of Code First: Girls, definitely check out their courses. They have free community courses and professional courses aimed at getting women into tech.

Rails Girls London

The 2-Day Rails Girls London Installation Party and event in December 2017 was cool beans! Held at Deliveroo, it featured some inspiring lightning talks and coaching.

There’s plenty of materials online too! Watch out for their next event!

Codebar London

Codebar is growing so fast across the UK and the world. I have been a student as part of the Codebar London chapter for a while now. It’s been cool to meet everyone over good food and code! Yummy!

Thank you to all the coaches who have coached me so far, your workshops have been so insightful!

Technology for Good & Women Who Code London

Going to talks through https://www.meetup.com/ run by Tech for Good and Women Who Code London really got me thinking about the application of coding to specific social causes. I’d recommend checking out their upcoming events!

Makers

By the time I encountered Makers in late 2018, I was sure that Software Engineering was for me. I attended the Demo Day events and was blown away by the projects created by the students there. The Intro Cohort was a useful time for me to meet up with like-minded people and code together.

Makers felt like home to me, and I imagined myself there one day. I had actually looked at several coding bootcamps across the UK but was deterred by the costs. Luckily, I came across the Fellowship Programme at Makers and applied! It was the best decision I ever made. My interview was challenging, though it was one of the most enjoyable interviews I have ever had!

Girls in Tech London

I went to a conference organised by Girls in Tech London during London Tech Week 2019 on the intersections of tech and benevolence. It was a thought-provoking evening and I left feeling inspired to hack my tech career!

Thank you Girls in Tech London!

Final Thoughts

Onwards and upwards! 👩💻😍 I’m so happpyyyyyy!!!

Byeeeeeeee,

Kim

A Beginner’s Checklist to Starting Your First Machine Learning Project


Thinking about setting up your first machine learning project and don’t know where to start? This beginner’s checklist will walk you through a step-by-step thought process to get you started!

☑️ Step 1: Get a feel for what Machine Learning is all about 🙂

I’m assuming you arrived at this blog because you’ve heard of Machine Learning (ML) and Artificial Intelligence (AI) and watched a couple of videos here and there!

If you haven’t done so already, you can explore more through watching some cool TED talks here.

You can also explore an online course – there are many free courses available. You can check out this one by Udacity on ‘Intro to Machine Learning’ 🤖.

Don’t worry about writing the code yet; just get a feel for what’s happening in the Machine Learning world. Machine Learning is one field within Artificial Intelligence (AI), which covers many others. Here’s a fantastic blog if you would like to explore ‘Machine Learning vs. Artificial Intelligence’ further.

Ok, onto the next step! 😊 Don’t worry about achieving something perfect the first time round; the best way to learn is to get stuck into a small project.

☑️ Step 2: Try out a small project!

First things first, follow a tutorial to help you get started! Build something small to begin with and ask questions like:

  • ‘What does the data source look like?’
  • ‘How is the data being formatted?’
  • ‘What is the function of the code?’
  • ‘Why is this line of code here?’
  • ‘How is the machine learning model working?’
  • ‘How is this implemented in the code?’

This tutorial from Scikit-Learn is a good starting point to help you to get stuck in. The example shows how Scikit-Learn can be used to recognise images of hand-written digits.

Experiment! Try changing the type of classifier and performance metrics to see if this makes a difference to the ability of your model to identify the handwritten digits.

Congratulations! 🎉😎 You just built your first machine learning project! Take it easy ok, there’s a lot to take in already.

☑️ Step 3: What’s the problem you’re trying to solve?

Once you’ve tried one or two example projects, you can start to tackle your very own one!

Here are some questions to help you:

Is Machine Learning the right approach for your project?

Sketch out some ideas in your notebook and refine your idea. What questions are you trying to answer? What is your goal? Start small! Machine Learning may or may not be the right approach for your project, so before you invest a lot of time, share your idea around to sense check it is right for you.

Are you trying to work with images? Are you working with numerical data?

Understand what kind of data you will be working with – this will guide you towards the appropriate solution for your problem.

☑️ Step 4: Data Acquisition and Understanding the Dataset

Where are you going to get your dataset from?

Before you can build the Machine Learning model, you need access to a dataset. For all projects, data acquisition is a very important step.

How big is your dataset? Is it the best dataset for your project? Are there issues with the data?

Delve into your dataset; understand its structure. What is the format of the data? What are the key features of the dataset? Which parts of the dataset do you want to capture? Which bits are relevant? Is your dataset big enough?

N.B. You may not need all of your dataset. Be aware of biases in the dataset sample itself!

Wow! That’s a big step out of the way, now onto choosing your model. 🙂

☑️ Step 5: Which modelling approach is suitable for the domain you’re working in?

Are you going to let the model learn from unlabelled data by itself (unsupervised learning), or are you going to guide the training with labelled examples (supervised learning)? Hopefully from the previous steps, you have the gist of the problem type. Is it a classification, regression or clustering problem, or something else?

Here’s a cool Machine Learning Map to help you decide.

☑️ Step 6: Data Processing and Formatting

Ok, data is never in the form you want it to be…there will be some data processing and formatting needed to get the data into a form that’s suitable for your machine learning project.

☑️ Step 7: Machine Learning

There are so many options out there. It’s best to explore for yourself and pick what floats your boat 🚣. TensorFlow and Keras is a good combo, as well as Scikit-Learn 🙂 There are pros and cons to the technologies you choose. If you want, you can even set up an online coding notebook like a Colab notebook 📔 (pretty much a Jupyter notebook for the Python fans out there) so you can experiment a bit. Did I mention you can run your machine learning on a GPU for super speedy stuff?

If you want a quick run down on the techniques of Machine Learning, check out the crash course from Google.

☑️ Step 8: Data Splitting

Once you have your dataset ready, a key consideration is splitting it into a training and a testing dataset. The training dataset is the data your ML model will train on; the testing dataset is the data your model will be tested against to check how well it performs.

Top Tip! It is important to randomise the dataset before you split it, so the order of your dataset doesn’t have a major impact on the model training process.

There are many mathematical approaches to measuring model performance, but it is important to be aware of model overfitting. This is when the model fits the training dataset too closely and fails to generalise to new data.

The rule of thumb for proportions is generally 90% of the dataset for training / 10% for testing, but 75% / 25% and 80% / 20% splits are also common.
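As an illustration of how this looks in practice with scikit-learn (the 80% / 20% split and the toy data are just for demonstration):

from sklearn.model_selection import train_test_split

X = [[i] for i in range(100)]    # toy features
y = [i % 2 for i in range(100)]  # toy labels

# shuffle=True (the default) randomises the dataset before splitting
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, shuffle=True, random_state=42)

print(len(X_train), len(X_test))  # 80 20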

☑️ Step 9: Model Training

Model training is the official term to mean “Run the Machine Learning model LOL! It’s about time!” All the hard work so far has paid off! You are ready to train your model! Good luck! 👍

Here is a non-exhaustive list of the things you may want to consider:

  1. Where are you going to do the model training? If your dataset is massive, you may consider how long the training process may take.
  2. Consider doing test runs on a small sample of your dataset to check that your model can actually train! Seriously, you don’t want to be waiting around for ages and come back to find that there were bugs in the way you interfaced the data to the machine learning model! (Been there and done that LOL 😭)
  3. How many times is your model going to run through the training dataset?

☑️ Step 10: Model Fitting & Model Tuning

Once you have a trained machine learning model, check how well it performs by testing it against the test dataset (a fancy way of saying “the data your machine learning model has never seen before”).

Have a think about how you measure the model performance.

Here are some strategies to improve the performance of your machine learning model; beware of overfitting, of course!

  1. Go back to the data source! Is this the best data source for your model? Are there any pitfalls in your selected dataset? If the source itself is fine, maybe you can increase the sample size (how much data you’re using).
  2. Try choosing another machine learning algorithm and run an exercise to see which one yields the best result
  3. Play around with the proportion of data you set aside for training and testing
  4. Refine the training process: see if you can increase the number of passes through the dataset (epochs), although this will slow down the training process

Final Thoughts

You totally rock! Give yourself a pat on the back! Congratulations on doing Machine Learning 🎉🎉🎉🎉🎉🎉🎈🙌

Byeeeeeeee,

Kim