Things I Wished I Knew About DevOps Practices and Cloud Technologies When I Started my First Role in Tech

It’s 2021 and I’m just over a month into my third role as a Software Engineer & Tech Coach. It’s been a whirlwind of a journey so far! Here are some things I wished I knew about DevOps practices and cloud technologies when I started my first role in tech.

My role wasn’t just about full-stack Software Engineering in C#, but also involved DevOps practices and Cloud technologies

During my career switch into tech, I thought that DevOps practices and Cloud technologies were utilised solely by DevOps Engineers and Cloud Engineers. I underappreciated how much of my role involved DevOps practices and Cloud.

When I spoke to people in my network, especially those who had recently started their first roles in technology, it seemed like there was a mixed bag. Some people were not involved in DevOps and Cloud at all, though they mentioned some of their colleagues were. Others, like myself, had more of a hybrid role, and some people were doing DevOps and Cloud every single day!

What is DevOps in a nutshell?

AWS states, “DevOps is the combination of…philosophies, practices, and tools that increases an organisation’s ability to deliver applications and services…”. The infrastructure and processes that sit behind software ensure a smoother experience for building code, testing it, shipping it out and monitoring it.

DevOps and Cloud are there to help Developers

Some Software Engineers would say that DevOps and Cloud are not part of their role, so why should they bother? They do have a point. It’s a massive world, and recent product offerings like AWS Amplify help those who major on the front-end and API domains build mobile/web apps quickly. However, there’s value in learning some of the key concepts of how DevOps and Cloud are helpful.

In my first role in tech, I wanted to learn some fundamentals of DevOps and Cloud that would support me in my role as a C# Full-Stack Software Engineer.

In my team at the time, one of the projects we were tasked with was re-writing a legacy Excel application into a .NET Core 3.1 C# web application (at the time of writing this post, it’s .NET 5). I really liked the way my team worked together on this, all the developers/testers, business analysts, our product owner and scrum master mobbed on this.

Something popped into my head at the time: “Why can’t we just build the web application and then just deploy it to production for the users, easy right? I can just click around on the Azure Portal and just manually make my resources there and then manually deploy.”

Well, when we started mob programming on the cloud infrastructure process, I realised there was more to it than just ‘making something work’.

Automated Continuous Integration & Continuous Deployments Using Azure Repos & Pipelines

One of the things that stuck with me was CI/CD (Continuous Integration / Continuous Deployment). According to the AWS DevOps blog, “An integral part of DevOps is adopting the culture of continuous integration and continuous delivery/deployment (CI/CD), where a commit or change to code passes through various automated stage gates, all the way from building and testing to deploying applications, from development to production environments.”

I got to appreciate this by learning about git, git repositories on Azure Repos, managing branches and creating pipelines to build and deploy our C# solution.

During my learning process, I had a sneak peek at how different teams were utilising Azure Pipelines. At first I was hard-coding things in, and this sort of worked, but then I found myself copying and pasting all the time. I then realised parameterisation was helpful, as it let me supply different values for the same pipeline variables. This helped me and the other developers on my team because it meant we could replicate the same setup across the development, testing, pre-production and production environments of the pipeline. We could configure things to be switched ‘on’ and ‘off’ through code.

Separation of concerns was important here. We decided to go with an infrastructure pipeline and an app pipeline. If there were changes to the web application on a branch, CI/CD would automatically detect this and trigger a build and deployment onto the relevant environments using the relevant pipelines. Test suites would also run automatically. Once the Pull Request (PR) for the branch had been approved and merged, the CI/CD pipeline would build and deploy to the environments. No more arduous manual deployments like the ones we had to deal with for the original Excel application! Great!

Infrastructure-as-Code

During my first role, I realised that clicking around the settings on the Azure Portal to create and configure resources was helpful for me, but not helpful for others. It wasn’t repeatable. We had to think as a team about how we could define and configure the infrastructure using a better approach. This was where Azure Resource Manager (ARM) templates came in handy. They enabled us to define what infrastructure we wanted to create, how we wanted to create it and how to configure it.

The ARM templates were useful as they could be version controlled through git, just like we would version control code. There were also helpful extensions for Visual Studio for structuring and validating these templates.

Most importantly, they enabled a repeatable and testable process for our infrastructure.

Logging & Monitoring

So why do we need logging & monitoring? Let me put it this way: when you release a new feature for your product, that’s just the start. Just as a plane has a suite of telemetry to record readings from its instruments, software needs the same to ensure everything is operating as it should. Try to think about where logging and monitoring makes sense for you.

We used Azure Monitor to add observability into our applications, infrastructure and network.
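
As a flavour of what this looks like from the application side, here’s a minimal sketch of structured logging in an ASP.NET Core controller using the built-in ILogger abstraction. The controller name and log message are just illustrative, and I’m assuming Application Insights (or another Azure Monitor integration) is wired up to collect the output.

//A minimal, illustrative sketch; not our actual production code 🙂
using System;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Logging;

public class ReportController : Controller
{
    private readonly ILogger<ReportController> _logger;

    //ASP.NET Core injects the logger via the service container
    public ReportController(ILogger<ReportController> logger)
    {
        _logger = logger;
    }

    public IActionResult Index()
    {
        //Structured logging: {RequestedAt} becomes a queryable property in Azure Monitor
        _logger.LogInformation("Report page requested at {RequestedAt}", DateTime.UtcNow);
        return View();
    }
}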

Final Thoughts

This is just the surface of what DevOps and Cloud technologies can offer to developers; of course, there are specialists who go deeper into more concepts than those I’ve covered here. If you are working in tech, there is some benefit in learning some of the fundamentals about the infrastructure and processes that sit behind software to ensure smoother experiences for building code, testing it, shipping it out and monitoring it.

Hey Kim, what’s it like being a Software Engineer & Tech Coach? Q&A Session

How did you become a Software Engineer & Tech Coach?

I didn’t plan this path; it totally happened by accident! 😂

It was only back in February 2019 that I received a fully-funded scholarship to attend a 16-week intensive Software Engineering Bootcamp at Makers. I was a career switcher, having at the time spent over 4 years in sustainability and business consulting roles.

I have since been exposed to full-stack Software Engineering and DevOps practices from a range of roles and industries such as investment management, e-commerce and tech education.

While I was in my Software Engineer role at Trainline, a FTSE 250 rail and coach ticketing platform, a random advert popped up in my LinkedIn feed in December 2020: a Software Engineer & Tech Coach role at Tech Returners.

When I read the job advert, negative thoughts started coming into my head:

😥 “Am I doing the right thing? Is it too early in my tech career to do this? I’ll be leaving a FTSE 250 company, will I regret it?”

😥 “Am I even qualified for this? There are some technologies on the job description I know well, some I know enough to get by with, and some where I don’t have a clue yet!”

Somehow, because these thoughts came into my head, I wanted to pursue this more than ever! I tried to map things out rationally and thought about what I enjoyed doing: teaching people to code, creating workshops for the community alongside my friends, and my speaking and mentoring work. However, I still wanted to keep being an active Software Engineer, so the role was a great blend for me.

🙌 I applied for the role, did my 2-minute elevator pitch video, had my interviews and landed the job! 🙂

What do you do as a Software Engineer & Tech Coach?

It’s been just over a month since I started my role as a Software Engineer & Tech Coach at Tech Returners – whoop whoop! 🙂 It’s a hybrid role which means I get to do tech coaching and software engineering.

As a Tech Coach, I help to deliver programmes to upskill individuals at mid-senior levels in technology. Since learners on the programme have prior tech experience, it means I have the opportunity to explore tech concepts in a bit more depth. I’m currently leading sessions, helping with seminars on tech topics, having 1:1s with learners/pair programming with them, recording short videos and providing detailed code review feedback. I onboarded remotely and went straight into all the action. By Day 3, I was already delivering some sessions!

💜 I remember my first week watching in awe as the other Tech Coaches, James, Ellie and Heather did their thing! They conducted their roles with care, precision and best practice; I honestly wondered why people hadn’t heard of Tech Returners before.

One of my goals is to design and develop a 5-star curriculum to really innovate tech education. As a Software Engineer, I’m working on internal projects across the full software development lifecycle. There’s a project I’m working on which is totally brand new, a great chance for me to be involved with a product from scratch.

How do you go about teaching technical concepts? What techniques do you use?

👩🏻‍💻 I use plenty of visualisations

👩🏻‍💻 I explain tech concepts using analogies and relate them to real-life things

👩🏻‍💻 I record short videos/screen recordings to walk through tricky technical concepts, provide thorough code review feedback and help with debugging strategies

👩🏻‍💻 I conduct 1:1 and group video calls to host sessions, webinars and provide technical and wellbeing support

👩🏻‍💻 I try to start from the core principles and break down technical jargon as much as I can to make it sound less daunting. Some technologies like git version control use scary words which create a barrier to learning; even though the technology can be very powerful.

How do you balance learning & coaching?

I create and deliver programs to help underrepresented people refresh & upskill in the Software Engineering domain after a career break. I also deliver programs to upskill engineers at existing companies in all things DevOps!

That’s a lot of technologies! How can you keep up?

I would say I’m aiming for a T-shaped skillset. This means I have deep expertise in a few technologies, with supporting but less-developed skills in others. For example, I am more backend/cloud focused, with my primary language being C#; but if I get asked a question about React components and how to test them, even though that’s not my area of expertise, I’m able to conduct some research, put the pieces together or reach out to other tech coaches to put a technical recommendation together.

Over time, I have developed a skill for spotting patterns in code quickly, whatever the tech stack/languages used. Learners think I do some magic! The reality is, I don’t; it’s patterns I see again and again which help me to spot things quickly.

Developing myself while teaching others

I listened to the egghead.io developer chats podcast episode featuring Ali Spittel on Developing Yourself While Teaching Others and I found so much inspiration from Ali Spittel’s journey.

By going through the cycle of learning & coaching, I found myself solidifying my understanding of tech concepts and technologies at a faster rate than if I were to learn without teaching others. Before being a Tech Coach, I would become impatient and skip over a tech concept quickly just to ‘make things work’; I now focus more deeply and with higher precision on my learning, to enable me to provide the best technical coaching.

Since I’m not writing production code as often, I set some time aside (25 mins to 1 hour) in the morning before my work commitments to develop myself by building my own projects, practising my coding skills or researching technical concepts. I don’t code on the weekends or in the evenings after 6pm, because I find it’s important to have some time off. I’m trying to learn how to be a more effective and efficient learner every day. I also revisit technical concepts again and again, rather than moving on too quickly.

What do you enjoy most about being a Software Engineer & Tech Coach?

I love seeing others learn and grow in their technical skills and confidence. It’s not just about the technical journey, but the human one too.

I also really like the challenge of finding new ways to explain technical concepts and technologies in digestible ways. I like the feeling I get when I’m asked a question by the learners and I have the opportunity to go and explore it for myself.

I like pair programming and mob programming with the other Software Engineers & Tech Coaches so we can all learn together and continue to innovate tech education.

What would you say are the most challenging aspects of your role?

From a technical standpoint, there are times where I doubt myself and my abilities and I start to think: “What if I get caught out?” “What if I get asked a question and I don’t have a clue how to answer it yet?” “Surely, I’m the tech coach and I should know everything, right?” I always have to remind myself about my T-shaped skillset and that I don’t have to be an expert in everything.

From an emotional standpoint, I have a duty of care for my learners, which means providing support from a wellbeing standpoint, listening to my learners and helping them find ways to move forward and reflect for themselves. Therefore, I have to be more disciplined with the way I use my time than ever, so that I can focus on providing the best support possible, whilst also making sure I take care of myself and prioritise my own learning before supporting others.

How are you continuing to develop yourself? What’s in store for the future?

For my T-shaped skillset, I decided that I would focus on C# as my primary language. In terms of tech stack, I’m focusing on the backend and DevOps side of things. I’m not a specialist in HTML, CSS and React, though I’m able to work with them as best as I can.

I love creating content, designing, developing and innovating tech education, so would love to continue to create workshops for the community and do some public engagements around technology, such as my most recent collaboration with The National Museum of Computing and the Codebar Festival.

Thanks for reading! 🙂

C# Repository Design Pattern for Database Operations in a .NET Core 3.1 MVC Web App

Introduction

When building applications, it is important to consider how and where you’re conducting database operations.

Entity Framework Database Context (DbContext) and the Controller

Building a basic template for a .NET Core 3.1 application using a scaffolding approach like the one from this Microsoft tutorial is a great starting point. Firstly, let’s have a look at a small code snippet generated from the scaffolding.

In this example, the PusheenController class has actions for CRUD (Create, Read, Update and Delete) operations against the database. Here, we are directly interacting with the Entity Framework DbContext class called PusheenCustomExportCsvContext and retrieving data about Pusheens from the database. The PusheenCustomExportCsvContext is injected as a dependency into the PusheenController. In this web app, dependencies are added to the service container in the ConfigureServices method in Startup.cs.
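
For context, here’s a minimal sketch of what that registration might look like in Startup.cs. I’m assuming SQL Server and a connection string named after the context, as the scaffolding tutorial sets up.

//A minimal sketch of Startup.cs; other registrations omitted for brevity 🙂
public void ConfigureServices(IServiceCollection services)
{
    services.AddControllersWithViews();

    //Register the EF Core DbContext with the service container,
    //so it can be injected into controllers like PusheenController
    services.AddDbContext<PusheenCustomExportCsvContext>(options =>
        options.UseSqlServer(Configuration.GetConnectionString("PusheenCustomExportCsvContext")));
}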

However, it is easy to end up with big controllers; big in the sense that there’s a lot of database operations logic built into the controller. Since the DbContext is a dependency of the controller, a further issue is that if you were to test this, you would have to mock the DbSet and DbContext. It is definitely achievable to mock these if we like, but we would have to mock the Provider, Expression and ElementType properties and the GetEnumerator() method.

In larger applications, we would like to separate the concerns out into layers that are responsible for the business logic, presentation, database etc.

Example 1: DbContext and PusheenController

//Code omitted for brevity 🙂
namespace PusheenCustomExportCsv.Web.Controllers
{
    public class PusheenController : Controller
    {
        private readonly PusheenCustomExportCsvContext _context;

        public PusheenController(PusheenCustomExportCsvContext context)
        {
            _context = context;
        }

        //Code omitted for brevity 🙂

        // GET: Pusheen/Details/5
        public async Task<IActionResult> Details(int? id)
        {
            if (id == null)
            {
                return NotFound();
            }

            var pusheen = await _context.Pusheens
                .SingleOrDefaultAsync(m => m.Id == id);
            if (pusheen == null)
            {
                return NotFound();
            }

            return View(pusheen);
        }

        // GET: Pusheen/Create
        public IActionResult Create()
        {
            return View();
        }

        // POST: Pusheen/Create
        [HttpPost]
        [ValidateAntiForgeryToken]
        public async Task<IActionResult> Create([Bind("Id,Name,FavouriteFood,SuperPower")] Pusheen pusheen)
        {
            if (ModelState.IsValid)
            {
                _context.Add(pusheen);
                await _context.SaveChangesAsync();
                return RedirectToAction("Index");
            }
            return View(pusheen);
        }

What is the goal of the Repository Design Pattern and why is it useful?

Let’s assume we would like a presentation layer made up of controllers and views, and a service layer for the business logic and database operations. We can create a repository (in this case lumped into the service for simplicity) where our database operations and logic can sit.

The repository is in charge of interacting with the Entity Framework DbContext class, so the controller doesn’t have to.

Repository: Defining and Implementing the Interface

Here, we define an IPusheenService interface and implement it in the PusheenService class.

Example 2: IPusheenService

//Code omitted for brevity 🙂
namespace PusheenCustomExportCsv.Web.Services
{
    public interface IPusheenService
    {
        Task<List<Pusheen>> GetAllAsync();
        Task<Pusheen> Create(Pusheen pusheen);
        Task<Pusheen> Update(Pusheen pusheen);
        Task<Pusheen> Delete(Pusheen pusheen);
        Task<Pusheen> FindPusheenAsync(int? id);
        Task<Pusheen> FindPusheenById(int? id);
        bool PusheenExists(int id);

    }
}

Below is an example of how PusheenService implements FindPusheenAsync and FindPusheenById. These database operations were originally coded directly into the controller as we saw in Example 1.

Example 3: PusheenService

//Code omitted for brevity 🙂

        public async Task<Pusheen> FindPusheenAsync(int? id)
        {
            var pusheen = await _context.Pusheens.FindAsync(id);
            return pusheen;
        }

//Code omitted for brevity 🙂

        public async Task<Pusheen> FindPusheenById(int? id)
        {
            var pusheen = await _context.Pusheens
                .FirstOrDefaultAsync(m => m.Id == id);
            return pusheen;
        }

//Code omitted for brevity 🙂

Let’s see what our controller looks like now. The key difference is that the PusheenController is a lot slimmer and we don’t need to interact with the DbContext directly anymore; that’s the job of the repository now! 🙂

Example 4: PusheenController

//Code omitted for brevity 🙂
namespace PusheenCustomExportCsv.Web.Controllers
{
    public class PusheenController : Controller
    {
        private readonly IPusheenService _pusheenService;

        public PusheenController(IPusheenService pusheenService)
        {
            _pusheenService = pusheenService;
        }

//Code omitted for brevity 🙂

        // GET: Pusheen/Details/5
        public async Task<IActionResult> Details(int? id)
        {
            if (id == null)
            {
                return NotFound();
            }

            var pusheen = await _pusheenService.FindPusheenById(id);

            if (pusheen == null)
            {
                return NotFound();
            }

            return View(pusheen);
        }

        // GET: Pusheen/Create
        public IActionResult Create()
        {
            return View();
        }

        // POST: Pusheen/Create
        // To protect from overposting attacks, enable the specific properties you want to bind to, for 
        // more details, see http://go.microsoft.com/fwlink/?LinkId=317598.
        [HttpPost]
        [ValidateAntiForgeryToken]
        public async Task<IActionResult> Create([Bind("Id,Name,FavouriteFood,SuperPower")] Pusheen pusheen)
        {
            if (ModelState.IsValid)
            {
                await _pusheenService.Create(pusheen);
                return RedirectToAction(nameof(Index));
            }
            return View(pusheen);
        }
//Code omitted for brevity 🙂
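
One extra step to make this work: the IPusheenService needs to be registered with the service container so it can be injected into the PusheenController. Here’s a minimal sketch of how this might look in the ConfigureServices method in Startup.cs (the scoped lifetime is my assumption here; it’s a common choice for services that wrap a DbContext):

//Code omitted for brevity 🙂
public void ConfigureServices(IServiceCollection services)
{
    //Map the interface to its implementation so the DI container
    //can construct the PusheenController with a PusheenService
    services.AddScoped<IPusheenService, PusheenService>();
}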

Final Thoughts

I hope you found this post useful and enjoyed being part of my coding journey! Thank you for reading my blog! 🙂

You can find the link to my Github repo with the simple web app example here.

C# Unit Testing a Custom FileResult That Exports Data into a CSV file Using Streaming in a .NET Core 3.1 MVC App

Introduction

In the past two months at work, I was tasked with learning C#, as well as creating a web app using the .NET Core 3.1 MVC framework. I wanted to document the most interesting concepts in a series of blog posts.

In my last blog, I demonstrated how to create a custom FileResult to export data into a CSV file using streaming in a .NET Core 3.1 MVC web app. In this follow-on blog post, I will show you how to unit test the custom FileResult and the controller which produces this FileResult.

Again, the actual code was more complex; this blog was my attempt to abstract the core concepts into a simple web app using Pusheen the Cat as a fun example!

Unit Testing Custom FileResult With Streaming in .NET Core 3.1

In my last blog, I had a custom FileResult called PusheenCsvResult. To set the scene for unit testing, I used the NUnit testing framework, along with the FluentAssertions and FluentAssertions.AspNetCore.Mvc libraries, which provided a clear way to communicate what I was asserting in my tests (i.e. what the expected result was). I applied the Arrange, Act, Assert structure for this and am still learning the best way to do it!

P.S. Would highly recommend the book Agile Technical Practices Distilled: A learning journey in technical practices and principles of software design.

What was the goal?

Let’s start off with the goal! When we’re working with unit testing, it’s helpful to define what it is we’re checking for. In this unit testing situation, I wanted a way to check that the PusheenCsvResult’s method ExecuteResultAsync was streaming (writing) the response correctly to the HttpContext’s response body.

How did I go about doing it?

Knowing this, I followed some tips in the Agile Technical Practices Distilled book to start from the assertion and work backwards (Assert, Act, Arrange). I didn’t just magically know what I needed; it took some time to get there.

Setting it up

I created a [TestFixture] for testing PusheenCsvResult, and within the [SetUp], I defined a _httpContext and a _fileDownloadName, and made a fake _fakeActionContext object.

The reason I did this was because the PusheenCsvResult’s method ExecuteResultAsync took a parameter of type ActionContext.

Let’s recap my last blog for a second: the job of ExecuteResultAsync is to use StreamWriter to write to the response body of the HttpContext of the ActionContext, and the stream sits in between the application and, in this case, the response body. The data is written to the stream (the response body), and from the stream the CSV file is then produced.

In the unit test scope, I wanted to create a _fakeActionContext object as an instance of ActionContext, and in its constructor, I set the HttpContext to the _httpContext I defined earlier in my test [SetUp]. This made it possible to check what was written to the response body of that _httpContext.

//Code omitted for brevity 🙂

    [TestFixture]
    public class PusheenCsvResultShould
    {
        private PusheenCsvResult _pusheenCsvResult;
        private string _fileDownloadName;
        private string _expectedResponseText;
        private DefaultHttpContext _httpContext;
        private ActionContext _fakeActionContext;

        [SetUp]
        public void Setup()
        {
            _httpContext = new DefaultHttpContext();

            _fileDownloadName = "pusheen.csv";

            _fakeActionContext = new ActionContext()
            {
                HttpContext = _httpContext
            };
        }
        
        [Test]
        public async Task GivenActionContext_ExecuteResultAsync_ShouldWriteLineToHttpResponseBody()
        {
            
            //Arrange
            var data = new List<Pusheen>()
            {
                new Pusheen() { Id = 1, Name = "Pusheen", FavouriteFood = "Ice cream", SuperPower = "Baking delicious cookies" },
                new Pusheen() { Id = 2, Name = "Pusheenosaurus", FavouriteFood = "Leaves", SuperPower = "Roarrrrr!" },
                new Pusheen() { Id = 3, Name = "Pusheenicorn", FavouriteFood = "Butterfly muffins", SuperPower = "Making rainbow poop" }
                
            }.AsQueryable();

            _pusheenCsvResult = new PusheenCsvResult(data, _fileDownloadName);

            _expectedResponseText = System.IO.File.ReadAllText(TestContext.CurrentContext.TestDirectory + @"/TestData/expectedCsv.txt");

            var memoryStream = new MemoryStream();
            _httpContext.Response.Body = memoryStream;

            //Act
            await _pusheenCsvResult.ExecuteResultAsync(_fakeActionContext);
            var streamText = System.Text.Encoding.Default.GetString(memoryStream.ToArray());

            //Assert
            streamText.Should().Be(_expectedResponseText);
        }

    }

Let’s hop over to the test

For the [Test] itself, I checked that given an ActionContext, the method ExecuteResultAsync should WriteLine to the HttpContext response body.

I needed a PusheenCsvResult for my test, and that took 2 parameters for its constructor.

  1. data (as type IQueryable<Pusheen>)
  2. fileDownloadName (as type String)

I had already defined fileDownloadName earlier in the [SetUp], so the next step was to make some data for the test scenario. In this case, a new List<Pusheen> converted with .AsQueryable() was created, and I passed this into PusheenCsvResult’s constructor.

Based on this information, I made a file containing the text I expected to see, loaded as _expectedResponseText. In my assertion, I checked that the text I got from the stream should match the _expectedResponseText for the test to pass.
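
For reference, based on the header and row format that PusheenCsvResult writes (covered in my last blog), the expectedCsv.txt for the test data above would look something like this:

Pusheen, Food, SuperPower
Pusheen, Ice cream, Baking delicious cookies
Pusheenosaurus, Leaves, Roarrrrr!
Pusheenicorn, Butterfly muffins, Making rainbow poop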

Now, this was the tricky bit – how to deal with closed streams?

When I was testing this, I didn’t know what was wrong at first, as the test kept saying that it couldn’t access a closed stream. Since I defined the StreamWriter within a using block, the stream is closed once it’s done its job. This is not a bad thing, and is something I recommend doing in your implementation; but it meant I needed another way to access what was written to the stream for the purposes of the unit testing (in this case, the stream was the HttpContext’s response body itself).

I added some comments on the code snippet below to describe what was going on.

 // I create a new Memory Stream and set that stream as the Response Body of the _httpContext I'm using in my unit test scope
var memoryStream = new MemoryStream();
_httpContext.Response.Body = memoryStream;

//Act
// I await and pass the _fakeActionContext to my ExecuteResultAsync method. Reminder that I pointed the HttpContext of ActionContext to the _httpContext I made for testing
await _pusheenCsvResult.ExecuteResultAsync(_fakeActionContext);

//I need to make sure that I capture the contents of the memoryStream and store it against the variable streamText which I can access later in my assertion
var streamText = System.Text.Encoding.Default.GetString(memoryStream.ToArray());

Unit testing the Controller

The controller was a bit more straightforward. I used Moq to mock the PusheenService and its method GetAllPusheens() to return some data.

_mockPusheenService.Setup(p => p.GetAllPusheens()).Returns(data);

Here, I tested that ExportCsv on the PusheenController returned the result of type PusheenCsvResult and that the fileDownloadName and contentType were correct.

//The rest of the code has been omitted for brevity! 🙂

namespace PusheenCustomExportCsv.Tests.Controllers
{
    [TestFixture]
    public class PusheenControllerShould
    {
        private PusheenController _controller;
        private Mock<IPusheenService> _mockPusheenService;
        private Mock<IConfiguration> _mockConfig;
        private DbContextOptions<PusheenCustomExportCsvContext> _testDbOptions;
        private PusheenCustomExportCsvContext _testDbContext;
        
        [SetUp]
        public void Setup()
        {
            _mockPusheenService = new Mock<IPusheenService>();
            _controller = new PusheenController(_mockPusheenService.Object);
        }
        
        [Test]
        public void ExportCsv_Returns_CsvResult()
        {
            //Arrange
            var data = new List<Pusheen>()
            {
                new Pusheen() { Id = 1, Name = "Pusheen", FavouriteFood = "Ice cream", SuperPower = "Baking delicious cookies" },
                new Pusheen() { Id = 2, Name = "Pusheenosaurus", FavouriteFood = "Leaves", SuperPower = "Roarrrrr!" },
                new Pusheen() { Id = 3, Name = "Pusheenicorn", FavouriteFood = "Butterfly muffins", SuperPower = "Making rainbow poop" }
                
            }.AsQueryable();

            _mockPusheenService.Setup(p => p.GetAllPusheens()).Returns(data);

            //Act
            var result = _controller.ExportCsv();

            //Assert
            result.Should().BeOfType(typeof(PusheenCsvResult));
            result.FileDownloadName.Should().Be("pusheen.csv");
            result.ContentType.Should().Be("text/csv");

        }

        
    }
}

Final Thoughts

I hope you found this post useful and enjoyed being part of my coding journey! Thank you for reading my blog! 🙂

You can find the link to my Github repo with the simple web app example containing the custom FileResult and test project here.

C# Creating a Custom FileResult to Export Data into a CSV file Using Streaming in a .NET Core 3.1 MVC App

Introduction: What is the goal?

In the past two months at work, I was tasked with learning C#, as well as creating a web app using the .NET Core 3.1 MVC framework. I wanted to document the most interesting concepts in a series of blog posts.

In this blog, I will show you how to create a custom FileResult to export data into a CSV file using streaming in a .NET Core 3.1 MVC web app. This came about as I was asked to give the user the ability to export their dataset into a CSV file through the web app, but there was a lot of data to deal with. The goal here was to create something to export to a CSV file while taking performance into account.

The actual code was more complex; this blog is my attempt to abstract the core concepts into a simple web app using Pusheen the Cat as a fun example!

P.S. If you don’t know who Pusheen is, I am utterly obsessed with it. You’ll probably recognise Pusheen from numerous gifs and stickers on social media.

C# Streaming: What is it and why is it useful?

Imagine you had a lot of data to read/write as part of a web app solution; you probably want to break it down into bitesize pieces, as you won’t want to read/write a large file in one go! This can be achieved through streaming.

A stream sits between the application and the file; the benefit of using streaming is that read/write operations are a lot smoother. When you’re writing data somewhere, it is written to the stream, and from the stream it then goes to your chosen destination, usually a file. When you’re reading data, the data is read into the stream, and your web app then reads from the stream.

If you’d like to read a bit more, this article by Guru99 is particularly helpful.

“C# provides standard IO (Input/Output) classes to read/write from different sources like a file, memory, network, isolated storage, etc. System.IO.Stream is an abstract class that provides standard methods to transfer bytes (read, write, etc.) to the source. It is like a wrapper class to transfer bytes. Classes that need to read/write bytes from a particular source must implement the Stream class.” – Extracted from TutorialsTeacher
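
To make this concrete, here’s a minimal C# sketch (the file name and data are just examples) of writing lines through a StreamWriter to a FileStream, so the data flows through the stream in bitesize pieces rather than being built up in memory and written in one go:

using System.IO;

class StreamingExample
{
    static void Main()
    {
        //The FileStream is the destination; the StreamWriter writes into the stream,
        //which sits between our code and the file
        using (var fileStream = new FileStream("pusheens.csv", FileMode.Create))
        using (var writer = new StreamWriter(fileStream))
        {
            writer.WriteLine("Name, FavouriteFood");

            //Each line is written through the stream piece by piece,
            //instead of holding the entire file contents in memory first
            for (var i = 1; i <= 1000; i++)
            {
                writer.WriteLine($"Pusheen{i}, Ice cream");
            }
        }
    }
}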

Creating a Custom FileResult With Streaming in .NET Core 3.1

Why Create a Custom FileResult in the First Place?

The .NET Core 3.1 MVC framework provides the ActionResult class (here’s the link to the docs for it), which implements the IActionResult interface. ActionResult is essentially the return type of a controller method; it is the base class for lots of result classes which return models to views, file streams and more! There are many derived classes to choose from, but there wasn’t one to produce a CSV result. I wanted a FileResult which used streaming and exported to CSV.

The cool thing is that one of the derived classes is FileResult. This class represents an ActionResult that, when executed, will write a file as the response. We can extend this class to create a custom CSV FileResult.

Example of Custom FileResult With Streaming in .NET Core 3.1

Here is an example of a PusheenCsvResult which extends FileResult. As it is specific to exporting data about Pusheen the Cat, we give it some pusheenData as IEnumerable<Pusheen>. In its constructor, we pass in that data along with the fileDownloadName, and set the content type to "text/csv".

As we’re extending FileResult, we override ExecuteResultAsync with our implementation. In this example, we’re using StreamWriter to write to the response body of the HttpContext of the ActionContext. We write the header row for our CSV file and then iterate through our _pusheenData, writing each row. As a reminder, the stream sits in between the application and, in this case, the response body; the data is written to the stream (the response body), and from the stream the CSV file is then produced.

We define the StreamWriter within a using block. We use the StreamWriter.FlushAsync method to clear all buffers for the current writer, which causes any buffered data to be written to the underlying stream.

using System;
using System.IO;
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;


namespace PusheenCustomExportCsv.Web.Models
{
    public class PusheenCsvResult : FileResult
    {
        private readonly IEnumerable<Pusheen> _pusheenData;

        public PusheenCsvResult(IEnumerable<Pusheen> pusheenData, string fileDownloadName) : base("text/csv")
        {
            _pusheenData = pusheenData;
            FileDownloadName = fileDownloadName;
        }

        public async override Task ExecuteResultAsync(ActionContext context)
        {
            var response = context.HttpContext.Response;
            context.HttpContext.Response.Headers.Add("Content-Disposition", new[] { "attachment; filename=" + FileDownloadName });

            using (var streamWriter = new StreamWriter(response.Body)) {
              await streamWriter.WriteLineAsync(
                $"Pusheen, Food, SuperPower"
              );
              foreach (var p in _pusheenData)
              {
                await streamWriter.WriteLineAsync(
                  $"{p.Name}, {p.FavouriteFood}, {p.SuperPower}"
                );
                await streamWriter.FlushAsync();
              }
              await streamWriter.FlushAsync();
            }
        }

    }
}

Using the Custom FileResult in the Controller

Now that we have the PusheenCsvResult class, we can go ahead and use it in the controller.

//The rest of the code has been omitted for brevity! 🙂

public FileResult ExportCsv()
{
    return File(_pusheenService.GetAllPusheens(), "pusheen.csv");
}

public virtual PusheenCsvResult File(IEnumerable<Pusheen> pusheenData, string fileDownloadName)
{
    return new PusheenCsvResult(pusheenData, fileDownloadName);
}

App Demo

Here’s some screenshots of what it looks like from the front-end!


Final Thoughts

I hope you found this useful! I thought an improvement for this PusheenCsvResult class would be to make it generic, so that it can work with all kinds of datasets, not just the Pusheen dataset! That’s one for another day, but I’ve sketched a rough idea below! 🙂
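
Just to give a flavour, here’s a rough sketch of what a generic version could look like, using reflection over the type’s public properties to build the header and rows. This is an untested idea of the shape it might take, not a finished implementation:

using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;

public class CsvResult<T> : FileResult
{
    private readonly IEnumerable<T> _data;

    public CsvResult(IEnumerable<T> data, string fileDownloadName) : base("text/csv")
    {
        _data = data;
        FileDownloadName = fileDownloadName;
    }

    public async override Task ExecuteResultAsync(ActionContext context)
    {
        var response = context.HttpContext.Response;
        response.Headers.Add("Content-Disposition", new[] { "attachment; filename=" + FileDownloadName });

        //Use the public properties of T as the CSV columns
        var properties = typeof(T).GetProperties();

        using (var streamWriter = new StreamWriter(response.Body))
        {
            await streamWriter.WriteLineAsync(string.Join(", ", properties.Select(p => p.Name)));
            foreach (var item in _data)
            {
                await streamWriter.WriteLineAsync(string.Join(", ", properties.Select(p => p.GetValue(item))));
            }
            await streamWriter.FlushAsync();
        }
    }
}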

Watch out for my next blog on unit testing the Custom FileResult which Exports Data into a CSV file Using Streaming in a .NET Core 3.1 MVC App. I have a series of .NET Core MVC blogs coming up, so it should be exciting times!

Thank you for reading my blog! 🙂

You can find the link to my Github repo with the simple web app example containing the custom FileResult here.

Intro to Docker Containers & Microsoft Azure Part 1 – Beginner’s Guide to Containerising a Hello World Python Web App

Greetings!

Hi everyone, it’s great to be back again! This blog is part one of a two-part series on Docker and Microsoft Azure. In Part 1, we will containerise a Hello World Python web app using Docker. In Part 2, we will learn how to build and push the container image using DevOps pipelines on Microsoft Azure.

Prerequisites:

Before we get stuck in, here are some prerequisites:

Containers

Here is a useful link if you would like to have a quick 5 min intro to containers.

Docker

Docker is a set of platform-as-a-service products that use OS-level virtualization to deliver software in packages called containers. (Ref: https://en.wikipedia.org/wiki/Docker_(software))

It mitigates the classic “But it works on my machine!” problem and streamlines the development lifecycle by allowing developers to work in standardized environments using local containers which provide the applications and services. (Ref: https://docs.docker.com/engine/docker-overview/)

Containers are great for continuous integration and continuous delivery (CI/CD) workflows. Did I mention we can even integrate it with Azure? ☁️

You can get started on the Docker documentation here.

Docker Desktop

This blog assumes you have Docker Desktop installed. To install Docker Desktop, you can use this link. I’m using my lovely MacBook Pro 💻 for this – hehe! 😉 But you can choose whether you want to download Docker for Mac or Windows.

Python

For this tutorial, you will need Python 3.7+, which you can download from the following link.

Pip

You will also need the latest version of pip, which is the recommended tool for installing Python packages.

To check you have the right Python and pip versions, you can use the commands:

python --version
pip --version
Checking Python and Pip Versions

Now, onto the fun stuff! 🏄‍♀️

Step 1: Project Setup – Create a Project Directory

First, let’s create a new project directory called hi-there-docker. Please feel free to call your project directory any name you want, but just remember to reference it throughout this blog.

Open up the project directory in your favourite code editor. I find Visual Studio Code works quite well if you’re starting out.

Step 2: Project Setup – File Setup

Next, let’s create a requirements.txt file in the directory; it is good practice in Python projects to have this file to manage packages.

Top Tip! 💡

If you are using Windows, you can create a function called ‘touch’ (for the UNIX fans out there) which enables you to use the touch command to create a new file in Windows PowerShell. Enter the following command in Windows PowerShell to enable this:

function touch {set-content -Path ($args[0]) -Value ($null)}

In the requirements.txt file, enter the package ‘Flask==1.0.2’, as we will need Flask to create the Hello World application.

requirements.txt

Finally, enter the following in the terminal to install the packages listed in requirements.txt.

pip install -r requirements.txt

Step 3: Python – Create a Hello World Flask App

I won’t be going into detail on how Flask works, but you should check it out if you’re interested.

Now we’re onto Step 3: let’s create a new file called main.py.

In the main.py file, enter the following:

from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
   return "Hello World!"

if __name__ == "__main__":
   app.run(host="0.0.0.0", port=int("5000"), debug=True)

This is a very simple app with a single route that returns ‘Hello World!’.

main.py

Step 4: Create a Dockerfile

Once that’s complete, we can move onto Docker! Let’s create a Dockerfile. The Dockerfile contains the instructions used to build the container image, such as the base image for the container (and therefore the Python version), as well as the dependencies to install on build.

Create a Dockerfile in the root directory of your project folder and enter the following:

# Use the official Python 3.7 image as the base image
FROM python:3.7
# Copy the project files into the /app directory of the image
COPY . /app
# Set /app as the working directory for the instructions that follow
WORKDIR /app
# Install the dependencies listed in requirements.txt
RUN pip install -r requirements.txt
# Document that the app listens on port 5000
EXPOSE 5000
# Start the Flask app when the container runs
CMD python ./main.py

Your Dockerfile should look something like this:

Dockerfile

Step 5: Let’s Build the Image

Before we can build the image using Docker, let’s confirm the Docker CLI is working by typing the following into your terminal:

docker --version
Checking Docker Version

Also, check that Docker Desktop is up and running; there should be a cute little icon of the Docker whale on your desktop panel.

Docker Whale

Now we are ready to build the image! 💿

You can tag the image with a name of your choosing, but let’s use the name hi-there-image.

Ensure you are in the root directory of the project and enter the following into the terminal:

docker build --tag hi-there-image .
Building the Image

🎉 You’ve built your first image using Docker and tagged it with a name – woohoo! 🎉


Step 6: Running the Image as a Container

Once the Docker image has been built, you are now ready to run the image as a container.

Our container will be called:

hi-there-container

And our image name is:

hi-there-image

To start the application as a container, enter the following into the terminal:

docker run --name hi-there-container -p 5000:5000 hi-there-image
Running the Image as a Container

🎈That’s it! You are now running the image as a container! 🎈

Step 7: Go to the App

🥳 Now you can go to your app at http://localhost:5000 🥳

Go to the App

Final Step: Viewing and Managing Containers using the Docker CLI

#To display a list of running containers:
docker ps

#To stop one or more running containers:
docker container stop [container name(s)]

#For example, if we wanted to stop running the hi-there-container, we can run the following command in the terminal:
docker container stop hi-there-container

#To remove one or more containers:
docker container rm [container name(s)]

#For example, if we wanted to remove the hi-there-container, we can run the following command in the terminal:
docker container rm hi-there-container

#To confirm that the container is no longer running, check that it is no longer in the list:
docker ps

😊 Congratulations, you have just built and containerised a Hello World App using Docker! 😊

🤔 What’s next?

If you want to explore more on how to build and push the images using Docker as tasks within the Microsoft Azure Pipelines, watch out for Part 2 of this blog series. ☁️

My First 3 Weeks as a Software Engineer Summarised in 10 Quotes and Emojis!👩‍💻

Hi again! 🙂

I just finished my first 3 weeks as a Software Engineer. Woohoo! Here is my experience so far in 10 quotes and emojis! 👩‍💻

1 👩‍💻🤩 “I’m so excited, I can’t sleep!” 😴 “Hey, get to sleep already! You need to be fresh for tomorrow!” 👩‍💻🤩 “Awwww…but I’m too excited!”

2 💃“Oooo…how does a Microsoft Surface Pro even work? What’s this flappy thing at the back?”

3 “Hello Microsoft Windows, haven’t used you in a while!…Windows Update!!! 😂”

4 “🤓 C# is so cool!!!! I finally get to use a statically-typed programming language!”

*Goes to build solution* = ERROR: ; expected

AHHHHHHHHHHH!!!!!!! 👿

5 “Let’s get this code out! Deployment, what can possibly go wrong with the build and deployment pipeline? 🚀”

Deployment fails…arghhhhhh!!!!! 😡

Goes into ultra mode to fix issue 💪

Deployment passed – yipppeeee!!! 😎

6 “It works, it actually works!!! Oh yeah!! Ice cream time!” 🍦🙌🏼😃

7 “I can figure this out…I can figure this out…should I ask for help? 🤔Nah, I can figure this out…I can figure this out…oh man…I NEEEDDD HELP!!! SOMEONE HELP!!”

8 “How have I not used Visual Studio IDE before? It’s A-M-A-Z-I-N-G!!” 💻

9 “Am I meant to be here? I don’t have a clue what’s going on. Damn…I’m actually writing code!” 🎊

10 “Wow, I’ve learnt so much! How did all of this happen?” 😊

Final Thoughts

As you can see, a lot of ups and downs, but I’m absolutely loving it! 🙂

Byeeeeeeee,

Kim

I got my first ever job as a Software Engineer!!!

I got a job as a Software Engineer!!!


Getting ready to leave this morning for my first day!

I GOT A JOB AS A SOFTWARE ENGINEER! 🥳🎊🥂🎈

Today, I am so excited to announce my first ever role as an Associate Software Engineer at M&G Plc! The career change has been an adventurous ride; dreams really do come true!

I thought I would never be able to do this without a Computer Science degree, but I was proven wrong by the amazing tech community who supported me!

If you’re interested in hearing more about my career change, check out a Q&A on my Coding Journey here.

Special Thanks!

Thank you to all those who have been part of my tech journey so far!

I also want to do a big shout out to EVORA Global, Makers, Codebar, Code First: Girls, Women Who Code, Girls in Tech, Inc., Tech for Good and Rails Girls for supporting my journey into tech!

EVORA Global

My friends at the sustainability consultancy are a bunch of fantastic people, who inspired me to come into work every day and do my best. After a year as a Junior Sustainability Consultant on the consulting team, I was given the opportunity to put forward some ideas to grow the company’s proprietary technology solution, which was focused on the sustainable real estate industry. The solution had a data-driven approach to support environmental data management and reporting for commercial real estate sector clients.

I’m so grateful that the company put its trust in me and offered me the chance to be part of their new technology team. It was pretty much a blank slate at the time and something felt ‘right’ about it. Though I hesitated to leave my sustainability consulting days behind, I was elated at the prospect of being part of something special.

For over 2 years, I grappled with the concept of Agile and engaged with the team on the development of the sustainability software through the implementation of agile product/project management strategies, business requirements gathering and specification. I learned about Scrum iterative software development and how to use Atlassian’s JIRA tool.

Thank you EVORA for being part of my tech journey!

If you haven’t heard of EVORA, definitely check out their website here!

Code First: Girls

In September 2017, I joined the Code First: Girls Web Development Beginner’s Course and never looked back. I remember dashing off after work one evening a week to the Twitter UK HQ, utterly exhausted but feeling so excited and energised to code. The weekly sessions covered so many things, such as HTML, CSS, JavaScript, jQuery, git, GitHub collaboration, development concepts, Twitter Bootstrap and responsive web development; I even had the chance to work on a group project.

As soon as that ended, I enrolled on the Code First: Girls Advanced Ruby Course, where I was introduced to Ruby programming, the Sinatra framework, GET/POST requests, development concepts, automated emails using Mailgun, external APIs and deployments. I got to understand application deployment on Heroku’s cloud hosting services and explored the Twitter API.

Thank you Code First: Girls! You really helped me to find my passion. As a Code First: Girls alumna, I feel like I could change the world!

If you haven’t heard of Code First: Girls, definitely check out their courses. They have free community courses and professional courses aimed at getting women into tech.

Rails Girls London

The 2-Day Rails Girls London Installation Party and event in December 2017 was cool beans! Held at Deliveroo, there were some inspiring lightning talks and coaching sessions.

There’s plenty of materials online too! Watch out for their next event!

Codebar London

Codebar is growing so fast across the UK and the world. I have been a student as part of the Codebar London chapter for a while now. It’s been cool to meet everyone over good food and code! Yummy!

Thank you to all the coaches who have coached me so far, your workshops have been so insightful!

Technology for Good & Women Who Code London

Going to talks through https://www.meetup.com/ run by Tech for Good and Women Who Code London really got me thinking about the application of coding for specific social causes. I’d recommend checking out their upcoming events!

Makers

By the time I encountered Makers in late 2018, I was sure that Software Engineering was for me. I attended the Demo Day events and was blown away by the projects created by the students there. The Intro Cohort was a useful time for me to meet up with like-minded people and code together.

Entering Makers felt like home to me and I imagined myself there one day. I actually looked at several coding bootcamps across the UK, but was deterred by the costs. Luckily, I came across the Fellowship Programme at Makers and applied! It was the best decision I had made. My interview was challenging; though it was one of the most enjoyable interviews I have ever had!

Girls in Tech London

I went to a conference organised by Girls in Tech London during London Tech Week 2019 on the intersections of tech and benevolence. It was a thought-provoking evening and I left feeling inspired to hack my tech career!

Thank you Girls in Tech London!

Final Thoughts

Onwards and upwards! 👩‍💻😍 I’m so happpyyyyyy!!!

Byeeeeeeee,

Kim

We did it! Top 5 Reflections – Machine Learning Final Project @ Makers

The Final Project at Makers

For the final project at Makers, I chose Art/Music AI as my topic of choice. I was assigned to a team called ‘AJAK’ to build a project of our choice.

For our project, we ended up using a Convolutional Neural Network Machine Learning model to classify doodles. The aim was for the user to input a doodle and for the model to output a prediction of what the user had drawn. In our app, the user can draw a camera, crown or rabbit.

We all came into the project with little to no knowledge of Machine Learning. We only had a week and a half to complete the project, so it was a big achievement for us when we delivered our product on Demo Day!

You can check out our repo on Github!

Check out our app here: https://ajak-doodler.herokuapp.com/

AJAK Doodle App

We’re on Social Media!

If you missed the action, don’t worry! You can catch up via LinkedIn, Twitter or Facebook.

Check out the LinkedIn post

Here’s the Twitter post

We did it!!! 😍 @makersacademy thank you all, it’s been a blast and a great experience. Had so much fun on the group project #MachineLearning #Python #ArtificiallIntelligence #agile https://t.co/wtzT9HINOT

— Kim Diep (@thekimmykola) May 24, 2019

Missed the May 2019 Demo Day event @ Makers? You can watch the presentations on Facebook.

What’s it like to do a Machine Learning Project?

Here are my top 5 reflections:

#1 Machine Learning is flipping awesome!!!

I went into the project with some theoretical knowledge on Machine Learning, but no implementation know-how at all. Within 10 days, I fell in love with deep learning technologies and now feel equipped to do my own projects!

#2 Data acquisition and processing was a key part of the project

Even before the model could be trained, there was a lot of decision-making about where to get the data from and what format the source data was in, plus exploration to work out what was possible given the dataset. Data processing was important to get the data into the right format for our model.

#3 Building in Research & Development (R&D) time at the start of the project paid off

Given the team’s limited knowledge of Machine Learning, the first couple of days were spent on research. Whilst the other teams were putting code down, we hadn’t produced any code yet. This didn’t matter, as we had taken on a challenge and stuck to our team goals.

Personally, I learnt a lot from exploring a classification problem using the Handwriting MNIST dataset (literally the ‘Hello World’ of Machine Learning) and doing some crash courses using online tutorials.

We learned together as a team, used the whiteboard to break down our problem and made sure every team member understood the domain and choice of model. We chose to use a Convolutional Neural Network (CNN) in the end!

Understanding Convolutional Neural Networks (CNN)

#4 Re-grouping as a team was useful to make informed decisions

There were a couple of moments in the project where we had to make pivotal decisions on the pros and cons of the technical implementation and balancing against delivering our Minimum Viable Product (MVP).

Re-grouping as a team and diagramming ideas out made it easier to be on the same page and created the space for ideas to be generated and decisions to be made!

Deciding on our technical architecture

#5 Sharing the love for Agile!

Having daily stand-ups, retrospectives and valuing communication over processes helped us to apply Agile theory to Agile practice! This made our team gel a lot better and made our project more engaging to create with the end-user in mind!

Final Thoughts

We delivered a kick-ass interactive project!

Thank you to my team for the wonderful journey into Machine Learning! 🙂 You guys were awesome – a pleasure working with you all 🙂

Byeeeeeeee,

Kim

An Experimental Mindset – Learning Quickly, Reflecting Deeply @ Makers

It’s been just over six weeks since I embarked on my programme at Makers and wow, has it been one crazy ride! As I sit on the train, I reflect on what’s been happening, trying to digest it all. Settling into a new environment, new routine and meeting new people has been an exhilarating experience. So what’s been going on, you might ask?

An Experimental Mindset – Learning Quickly, Reflecting Deeply

In an earlier blog, I spoke about putting on that child-like mindset, the boundless state of mind where creative juices flow with the desire to experiment and make stuff!

Here’s the quote from my blog:

“When I was 5, I got my first computer and wanted to be a computer hacker. I imagined myself working undercover as a secret agent, making potions and hacking through computers, like Disney’s Kim Possible saving the world from monsters!”

The truth is, this childhood feeling never really left me. It was just hiding away, waiting to be re-discovered.

Over the past couple of years, I realised that at times when I thrived, it was a matter of having the courage to take risks and the resilience to bounce back from setbacks. It was only during the past week at Makers that I began to feel comfortable with the unknown and live out my experimental mindset. It was only by doing this that I felt like I was riding the wave. Learning quickly and reflecting deeply has been really effective for my own well-being and personal growth.


What’s it like to learn coding?

The fast-paced learning at Makers means I might be introduced to a concept (or even multiple concepts) in the morning and then apply them to solve problems in no time at all – iterating towards weekly goals. This took a bit of getting used to. Instead of waiting for the ‘perfect’ time to apply theory to practice, it was about being pro-active and jumping off the diving board, testing ideas out straight away. Having a boundless, experimental mindset makes learning engaging and is helping me and my peers to generate more innovative solutions – totally great for solving tricky problems.

At times, this required me to dig deep. I am a perfectionist at heart, and this meant I felt uncomfortable when I did not understand the ‘whole’ concept straight away. Though when I let go of this side of me, I experienced something called ‘Beast Mode’, which describes the feeling of being ‘uncomfortably excited’. This is the ideal state of mind to test ideas out without fear of failure. Having this experimental mindset is key to riding the wave with confidence, something I want to harness and continue.

It’s not just about writing code, it’s about the process

It is time to put the ‘human’ back into technology. It is so easy to think that technology is all about writing code like something from ‘The Matrix’, but it is all about the people and processes. Whether it is creating a sustainability application to improve people’s relationship with their environment, or an app to aid learning in museums, technology is empowering and transformative.

Here is a non-exhaustive list of some of the processes I encountered over the past couple of weeks and why I think they are important:

User Requirements

Great ideas are generated all the time, but how can these ideas translate into something tangible? It all starts with the User Story. User Stories describe, from the users’ point of view, what they want to do, why they want to do it and how they will achieve it. A User Story has a beginning, middle and an end. Agile teams use stories to flesh out user requirements, conduct proofs of concept, assess technical feasibility, sketch out designs and create a feedback loop to improve the overall approach. Even when a feature is released to the end-user, more feedback can be sought to further improve the product and user experience. It takes a multi-disciplinary team to make this happen, not just developers.


Modelling/Diagramming/Object-Oriented Program Design

In technology, to solve a problem, you have to start by modelling the world you’re trying to simulate, which is an abstraction of reality. It is impossible to model every single data flow and relationship between things perfectly; instead, you go for a model which captures enough to help you understand things a bit better.

Diagramming describes the process of visualising the domain (the world you’re trying to model). There are plenty of tools and techniques available such as flowcharts, wireframing and class diagrams, just to name a few.

Test-Driven Development

Test-driven development is the process of developing code from the test first. At the start, it is a change in thinking, but it is such a powerful process. Rather than diving straight into the code, the aim is to write the test first, derived from the business requirements, and then write the simplest code to pass that test. Once the test passes, some refactoring is done to tidy up the code. This is called the RED-GREEN-REFACTOR loop. By writing the test first, you guide the development of the code and ensure alignment to the requirements.

Testing in Web Apps / User Experience

Applying test-driven development to a web application is like adding another layer to the cake. I’ve been using Capybara for feature testing on Sinatra applications. This enables testing from the user’s perspective as they navigate through the application, and captures the business logic and acceptance criteria for satisfying the user requirements.

It is not uncommon to find combined testing approaches throughout the whole software development lifecycle. For example, in my Ruby-Sinatra application, I may start by writing a feature test which tests the business logic and how the end-user may interact with the application; on top of this, I may refine the logic further through unit tests which lead me to passing the feature test. So far, I’ve been using RSpec for Ruby and Jasmine for JavaScript.

Of course, test-driven development is not the answer to everything and can be combined with other approaches to reflect and iterate on the requirements and code development. A powerful way of checking that the business acceptance criteria are satisfied is getting feedback from the users, undergoing a User Acceptance Testing (UAT) phase and building this into the Agile development process. Collaborating in a team, you may also have to check your builds are working, as well as carry out system integration testing and penetration testing for security purposes – so testing is not something to be underestimated!

Something I want to explore over the next couple of weeks are progressive web apps and how they improve user experience.

Pair Programming

By engaging in pair programming sessions, I have been able to learn from others, improve my ability to verbalise my thoughts, and build the confidence, respect and patience to work with others. Having experience working in an Agile team and tutoring, pair programming is very familiar to me; it’s been great to reinforce this at Makers.


“…being ‘uncomfortably excited’. This is the ideal state of mind to test ideas out without fear of failure. Having this experimental mindset is key to riding the wave with confidence.”

Thank you to Undraw for providing the images used in this blog! 🙂

Byeeeeeeee,

Kim