Hello, I'm Mateusz Roth, a versatile software engineer from 🇵🇱🇪🇺 specializing in JavaScript, TypeScript, Node.js, and React. I'm eager to work with Node.js, Python, Golang, serverless, or IoT.
An open-minded, friendly person who loves learning.

React Hooks

  • Hooks are a new addition in React 16.8. They let you use state and other React features without writing a class.
  • Hooks allow you to reuse stateful logic without changing your component hierarchy.
  • Without hooks, mutually related code that changes together gets split apart across different lifecycle methods. Hooks let you split one component into smaller functions based on what pieces are related (such as setting up a subscription or fetching data), rather than forcing a split based on lifecycle methods.

React Hooks advantages

  • hooks are easier to test (as separate functions) and make the code cleaner and easier to read (i.e. fewer LOC)
  • code that uses hooks is more readable and has fewer LOC (https://m.habr.com/en/post/443500/)
  • hooks make code more reusable/composable (and they don't create another element in the DOM the way HOCs do)
  • you can define several separate lifecycle effects instead of having everything in one method
  • hooks are going to work better with future React optimizations, like ahead-of-time compilation and component folding (https://github.com/facebook/react/issues/7323), which means dead code elimination at compile time (less JS code to download, less to execute)
  • hooks reflect the real, functional nature of React; using classes makes it easier for developers to make mistakes and fall into React antipatterns
  • hooks are very convenient for reusing stateful logic, which is one of their selling points. But this is not applicable when an app is built with a state management library and the stateful logic doesn't live in React components. In that case hooks are mostly for readability and for making the code future-proof.
  • With HOCs we separate unrelated state logic into different functions and inject them into the main component as props; with hooks we can solve the same problem, but without the wrapper hell (https://cdn-images-1.medium.com/max/2000/1*t4NuFEZWHcfPHV_f487GRA.png)

React Hooks vs HOCs and render props

Hooks list

  • basic hooks: useState, useEffect, useContext
  • additional hooks: useReducer, useCallback, useMemo, useRef, useImperativeHandle, useLayoutEffect, useDebugValue

Hook useState

  • if you call useState many times, you must do it in the same order during every render
    • React relies on the order in which hooks are called
    • React remembers the initial order of hook calls, so we can't conditionally add or remove hooks
  • React will remember the current value between re-renders and provide the most recent one to our function:
    import React, { useState } from 'react';

    function Example() {
      // Declare a new state variable, which we'll call "count"
      const [count, setCount] = useState(0);
      // ...
    }
  • unlike the setState method found in class components, useState does not automatically merge update objects, so we have to manually copy over the previous state values that we don't intend to modify:
setState(prevState => {
  // Object.assign would also work
  return { ...prevState, ...updatedValues };
});
  • an alternative to the useState hook is the useReducer hook, which is better suited for managing state objects that contain multiple sub-values rather than simple primitives like numbers or strings
  • useState - if the initial state is the result of an expensive computation, you may provide a function instead, which will be executed only on the initial render:
const [state, setState] = useState(() => {
  const initialState = someExpensiveComputation(props);
  return initialState;
});

Hook useEffect

  • it serves the same purpose as componentDidMount, componentDidUpdate, and componentWillUnmount in React classes
  • React will remember the function you passed (we’ll refer to it as our “effect”), and call it later after performing the DOM updates
  • Hooks let you organize side effects in a component by what pieces are related (such as adding and removing a subscription), rather than forcing a split based on lifecycle methods
  • you should call hooks at the top level of the render function, this means no conditional hooks:
// BAD:
if (user.isAdmin) {
  useEffect(() => {
    // ...
  });
}

// GOOD:
useEffect(() => {
  if (user.isAdmin) {
    // ...
  }
});
  • the function passed to useEffect is going to be different on every render. This is intentional. In fact, this is what lets us read the count value from inside the effect without worrying about it getting stale. Every time we re-render, we schedule a different effect, replacing the previous one
  • Unlike componentDidMount or componentDidUpdate, effects scheduled with useEffect don’t block the browser from updating the screen

    Cleaning up subscriptions

  • we might want to set up a subscription to some external data source
  • In a React class, you would typically set up a subscription in componentDidMount, and clean it up in componentWillUnmount
  • the function we return from our effect is the optional cleanup mechanism for effects:
    import React, { useState, useEffect } from 'react';

    function FriendStatus(props) {
      const [isOnline, setIsOnline] = useState(null);

      useEffect(() => {
        function handleStatusChange(status) {
          setIsOnline(status.isOnline);
        }
        ChatAPI.subscribeToFriendStatus(props.friend.id, handleStatusChange);
        // Specify how to clean up after this effect:
        return function cleanup() {
          ChatAPI.unsubscribeFromFriendStatus(props.friend.id, handleStatusChange);
        };
      });

      if (isOnline === null) {
        return 'Loading...';
      }
      return isOnline ? 'Online' : 'Offline';
    }
  • React performs the cleanup when the component unmounts. However, as we learned earlier, effects run for every render and not just once. This is why React also cleans up effects from the previous render before running the effects the next time. We will discuss optimization in the next section.

Second parameter

  • useEffect accepts a second argument: an array of values that the effect depends on. In the example below the effect will be executed only when props.source changes:
useEffect(() => {
  const subscription = props.source.subscribe();
  return () => {
    subscription.unsubscribe();
  };
}, [props.source]);
  • this way you can tell React to skip applying an effect if certain values haven’t changed between re-renders
  • this also works for effects that have a cleanup phase
  • If you want to run an effect and clean it up only once (on mount and unmount), you can pass an empty array ([]) as a second argument

Hook useContext

Hook useCallback

  • returns a memoized callback
  • will return a memoized version of the callback that only changes if one of the dependencies has changed
const memoizedCallback = useCallback(() => {
  doSomething(a, b);
}, [a, b]);

Hook useMemo

  • If you’re doing expensive calculations while rendering, you can optimize them with useMemo:
const [c1, setC1] = useState(0);
const [c2, setC2] = useState(0);

// This value will not be recomputed between re-renders
// unless the value of c1 changes
const sinOfC1: number = useMemo(() => Math.sin(c1), [c1]);
  • useMemo is a generalized version of the useCallback hook. useMemo is primarily used for caching values, but it can also cache functions the way useCallback does, because useCallback(fn, deps) is equivalent to useMemo(() => fn, deps):
// Some function ...
const f = () => { /* ... */ }

// The following two lines are functionally equivalent:
const callbackA = useCallback(f, [])
const callbackB = useMemo(() => f, [])

Extracting custom hook

import React, { useState, useEffect } from 'react';

function FriendListItem(props) {
  const [isOnline, setIsOnline] = useState(null);
  useEffect(() => {
    function handleStatusChange(status) {
      setIsOnline(status.isOnline);
    }
    ChatAPI.subscribeToFriendStatus(props.friend.id, handleStatusChange);
    return () => {
      ChatAPI.unsubscribeFromFriendStatus(props.friend.id, handleStatusChange);
    };
  });

  return (
    <li style={{ color: isOnline ? 'green' : 'black' }}>
      {props.friend.name}
    </li>
  );
}

to

import { useState, useEffect } from 'react';

function useFriendStatus(friendID) {
  const [isOnline, setIsOnline] = useState(null);

  useEffect(() => {
    function handleStatusChange(status) {
      setIsOnline(status.isOnline);
    }

    ChatAPI.subscribeToFriendStatus(friendID, handleStatusChange);
    return () => {
      ChatAPI.unsubscribeFromFriendStatus(friendID, handleStatusChange);
    };
  });

  return isOnline;
}
function FriendStatus(props) {
  const isOnline = useFriendStatus(props.friend.id);

  if (isOnline === null) {
    return 'Loading...';
  }
  return isOnline ? 'Online' : 'Offline';
}

Sources

React - miscellaneous notes

General notes

Hints

  • treat components like pure functions and don’t modify props

Terms

Reconciliation

https://reactjs.org/docs/reconciliation.html

  • React implements a heuristic O(n) algorithm to compare rendered tree nodes
  • Two elements of different types will produce different trees
  • The developer can hint at which child elements may be stable across different renders with a key prop
  • When a component updates, the instance stays the same, so that state is maintained across renders
  • Keys should be stable, predictable, and unique. Unstable keys (like those produced by Math.random()) will cause many component instances and DOM nodes to be unnecessarily recreated, which can cause performance degradation and lost state in child components

Prop drilling

Passing props through a component tree, not a good pattern in most cases:

Props drilling example

Context

Context is a way to essentially create global variables that can be passed around in a React app, so that no prop drilling is needed.

Context is often touted as a simpler, lighter solution to using Redux for state management.

Testing

Snapshot testing

  • To make our tests really useful, we should run them before each commit, so we can be sure we didn't break anything accidentally. For example, the _husky_ package can help set up and run pre-commit hooks.
  • Snapshot tests become really powerful when you are working on a larger application or multiple developers are working on the same codebase.
  • Live examples of failing snapshot tests are available in the section "Typical regressions" here: https://medium.com/simply/snapshots-painless-testing-of-react-components-6bce3c4d51fc

[DRAFT] Frontend - miscellaneous notes

Styled Components

Pros of

  • styles are part of JavaScript (CSS-in-JS) in this case and allow you to build components from CSS snippets
  • with styled components you can build CSS dynamically based on props and state of the component
  • you don't have to create tons of class names with related CSS declarations written somewhere else; class names are generated automatically and you don't have to care about them anymore

Terms

Codemod

A codemod is a script run against a codebase, for example to update the codebase automatically after changes in the API of an upgraded library.

Tree Shaking

// TODO

Bundle Splitting

// TODO

AMP

// TODO

Supported in Next.js.

Frontend integration tests

// TODO

Integration tests may be based on Selenium webdriver, which may be combined with chromedriver to test in headless Chrome.

Module/nomodule pattern

The module/nomodule pattern provides a reliable mechanism for serving modern JavaScript to modern browsers while still allowing older browsers to fall back to polyfilled ES5.

Node.js introduction - pros and cons, qualities

What is Node.js and its pros and cons

Most of the information below is taken from Netguru blog.

What is

Node.js is famous for its asynchronous I/O and single-threaded event loop model, which can efficiently handle concurrent requests. Node.js is based on an event-driven, non-blocking I/O model and uses only a single CPU core. The non-blocking I/O system lets you process numerous requests concurrently.

  • We can conclude that Node.js is single-threaded, but in the background it uses multiple threads to execute asynchronous code with the help of the libuv library. Only the event loop is single-threaded.
  • https://medium.com/better-programming/is-node-js-really-single-threaded-7ea59bcc8d64 While Node.js itself is multithreaded – I/O and other such operations run from a thread pool – JavaScript code executed by Node.js runs, for all practical purposes, in a single thread. This isn’t a limitation of Node.js itself, but of the V8 JavaScript engine and of JavaScript implementations generally.
  • https://stackoverflow.com/a/40029919 The single threaded, async nature does make things complicated. But do you honestly think it’s more complicated than threading? One race condition can ruin your entire month! Or empty out your thread pool due to some setting somewhere and watch your response time slow to a crawl! Not to mention deadlocks, priority inversions, and all the other gyrations that go with multithreading.
  • https://stackoverflow.com/a/17959746 The main functionality differentiation between NodeJs based servers and other IIS/ Apache based servers, NodeJs for every connection request do not create a new thread instead it receives all request on single thread and delegates it to be handled by many background workers to do the task as required.
  • https://codeburst.io/how-node-js-single-thread-mechanism-work-understanding-event-loop-in-nodejs-230f7440b0ea

Node.js comes with many APIs suitable for backend development, e.g. the support for file systems, http requests, streams, child processes, etc. Browsers do offer some basic support for file systems or http requests, but those are usually limited due to security concerns.

Pros

Nowadays it is possible to write both the front end and back end of web applications in JavaScript, making app deployment much easier and more efficient.

Node.js as your server technology gives your team a great boost that comes from using the same language on both the front end and the back end. This means that your team is more efficient and cross-functional, which, in turn, leads to lower development costs.

Node’s ability to process many requests with low response times, as well as sharing things such as validation code between the client and server, make it a great fit for modern web applications that carry out lots of processing on the client’s side.

For SPAs, Node.js would only return the index page (index.html) while data would be sent via REST interfaces and controllers implemented server-side. From the design point of view, such an approach ensures a clear separation of concerns (SoC) between models, controllers, and views, with all data-related services implemented server-side.

Node.js is especially popular in real-time applications or when we seek a fast and scalable solution. An event-driven, non-blocking server is a good solution for instant propagation of updates, which requires holding a lot of open connections.

In particular, Node has a powerful Event API that facilitates creating certain kinds of objects ("emitters") that periodically emit named events "listened" to by event handlers. Thanks to this functionality, Node.js makes it easy to implement server-side events and push notifications, widely used in instant messaging and other real-time applications.

Node’s event-based architecture also works well with the WebSockets protocol that facilitates a fast two-way exchange of messages between the client and the server via one open connection. By installing WebSockets libraries on the server and the client side, you can implement real-time messaging that has lower overheads and latency, and faster data transfer than most other, more conventional, solutions.

IoT developers working in data-intensive scenarios can leverage the low resource requirements of Node.js. Low memory requirements allow for the easy integration of Node.js as software into single-board controllers such as Arduino, widely used for building digital devices that make up IoT systems. Finally, the Node community has been an early adopter of the IoT technology, creating over 80 packages for Arduino controllers and multiple packages for the Pebble and Fitbit wearable devices widely used in IoT systems.

Node.js offers fewer abstractions than for example ASP.NET, allowing developers to write code using a multitude of small components rather than configuring a vast number of parameters.

Node.js will prove useful in situations when something faster and more scalable than Rails is needed.

The Node.js developers community is a very active and vibrant group of developers who contribute to constant improvement of Node.js.

Cons

Node.js is not so good for developing CPU-intensive applications that involve the generation and processing of images, audio, or video. Being a single-threaded solution, Node.js may become unresponsive and slow when processing large files. In such cases, conventional multi-threaded solutions will be your best bet.

Node.js is not the best choice for applications with a vast code base – since Java provides strongly typed sources, refactoring and bug fixing will be more straightforward during maintenance.

Infra

Node.js works perfectly with the leading cloud computing tools, keeps the infrastructure costs under control, and gives you access to the best services that predict usage spikes, expand resources and control the development process.

With AWS and Node.js you get a set of development tools that can predict and detect an increase in web application traffic and usage to automatically add virtual machines to meet the new requirements.

SERVERLESS

Node.js allows for serverless app development. The easiest way to do it is to use the Serverless framework powered by AWS. You can build apps directly in the AWS environment. All the processes take place there, so you don’t need DevOps, because everything is configured automatically. Your app’s developer writes code, uploads it to Serverless, and it’s all set up.

MICROSERVICES

Node.js is an excellent pick for the microservices architecture approach, which offers excellent scalability and stability.

Node.js with microservices significantly reduces application deployment time and enhances efficiency, maintainability, and scalability of your applications.

Using microservices, you build your app from separate small blocks that perform one function (e.g., checkout in an e-commerce app, product page, shopping cart, etc). Each block receives information, computes it, and delivers the result. You can add, multiply, and remove these elements according to your needs. This brings stability. First of all, if one element crashes, for instance, the checkout, the users who are browsing the other parts of the store will not notice it.

Building software with a microservices architecture is fast and easy; however, if you include too many blocks and too many relations among them, supporting and expanding such an architecture may become tricky once you put dozens of elements together.

EXAMPLES

PayPal, a worldwide online payments system, has also moved its backend development from Java to JavaScript and Node.js. Previously, the engineering teams at the company were divided into those who coded for the browser and those who coded for the application layer, and it didn't work perfectly. Then full-stack engineers came to the rescue, but that model wasn't ideal either. Adopting Node.js solved their problems, as it allowed writing the browser and the server applications in the same programming language - JavaScript. As a result, the unified team is able to understand problems at both ends and react more effectively to customer needs. Read more here about how a smaller Node.js team started work 2 months after a bigger Java team and caught up with them: https://www.paypal-engineering.com/2013/11/22/node-js-at-paypal/

Sources

Advantages of development in Node.js

JavaScript CommonJS vs AMD

CommonJS

  • uses exports and require keywords
  • needs module identifier
  • not designed for browser
  • Browserify will let you use CommonJS in the browser
  • CommonJS require() is a synchronous call, it is expected to return the module immediately which does not work well in the browser
  • Node.js and RingoJS are server-side JavaScript runtimes, and yes, both of them implement modules based on the CommonJS Module spec

AMD

  • RequireJS implements it
  • suits web browser envs
  • supports asynchronous loading of module dependencies
  • define keyword
  • example:
    define('module/id/string', ['module', 'dependency', 'array'],
      function(module, dependency, array) {
        return ModuleContents;
      });

JavaScript - performance / time measurement of function calls

You can use performance.now():

const t0 = performance.now();
const arr1 = [1, 2, 3, 4, 5];
arr1.unshift(0);
const t1 = performance.now();
console.log(t1 - t0);

const t2 = performance.now();
const arr2 = [1, 2, 3, 4, 5];
const arr3 = [0, ...arr2];
const t3 = performance.now();
console.log(t3 - t2);

Or console.time and console.timeEnd:

console.time('unshift');
const arr1 = [1, 2, 3, 4, 5];
arr1.unshift(0)
console.timeEnd('unshift');

console.time('destructuring');
const arr2 = [1, 2, 3, 4, 5];
const arr3 = [0, ...arr2];
console.timeEnd('destructuring');

JavaScript - `parseInt` vs `Number` comparison

parseInt(string, radix);

var a = new Number('123'); // a === 123 is false
var b = Number('123'); // b === 123 is true
  • parseInt takes 2 arguments

Number:

  • returns NaN for string 123sdfdsf
  • returns NaN for undefined, NaN and {}
  • returns 0 for false, null, "", " "
  • returns 1 for true

parseInt:

  • returns 123 for string 123sdfdsf
  • returns NaN for all false, null, "", " ", true, {}, undefined and NaN
  • parseInt preferable for inputs where user can leave some additional characters

Why the test pyramid is a bullshit - guide to testing towards modern frontend and backend apps


image source: https://martinfowler.com/bliki/TestPyramid.html

TL;DR: The shape and levels of the test pyramid depend heavily on your application (and it doesn't have to be a pyramid!), but there are known anti-patterns.

In this guide I want to address the types of testing of web apps, especially in the face of the increasing popularity of Single Page Applications and microservices, the diversified terminology, and how we can relate all of this to the well-known test pyramid, which will also be covered here. I want to answer the question of what a good approach is to testing the frontend and backend apps we create today, in general and without reference to any specific tools.

After reading this article I hope you'll see why following patterns is not always a good solution. I tried to research this thoroughly, but if you see anything that can be improved or where I got something wrong, please write a comment about it.

Intro — purpose of testing

We test to verify that the software we develop meets the client's requirements and works as expected - or, more often, just to check that we haven't broken something with our latest changes to the codebase. As our software gets more complex, we are no longer able to test all possible broken cases manually. Or rather, we are able to test manually, but it's so time-consuming and boring that it will surely lead to overlooking a lot of bugs because of the human factor. That's why we start to automate tests and write them down. This gives us confidence that we made no mistake along the path of a use case, and the automation speeds up the whole process. Both developers and QA engineers write tests. We still need manual tests, though, because not everything can be covered with automation, so we also have manual QA testers.

The (not)well-known three

Before we start to talk about the test pyramid let’s introduce the basic three types of tests used within the test pyramid.

Unit tests

Unit tests are mostly written by developers and are done in the manner of white-box testing. We have to keep them simple, easy to debug, and isolated, which means fewer cases to cover by testing one method/function at a time. If we test, for example, 3 functions at the same time, each returning a different boolean value, that gives us 2 * 2 * 2 = 8 different cases to test. Unit tests give us fast feedback because units are tested separately in an isolated environment with mocked dependencies.

But what the heck is a unit?

That depends highly on the adopted approach to programming. In object-oriented programming (OOP) a unit mostly means a class, and the main approach is to test the public methods of classes. We don't reason about the private methods used by the public ones, because we treat a class as an indivisible unit. But there are also approaches where people treat single methods as units, and that may also be fine in some cases. In functional programming (FP) we test all functions separately, as a function is our smallest unit. We just want to make sure that, depending on the parameters we used to call our function, it returns the expected values.

However, I think there is no one correct answer. It's just fine to agree on one term within your team and stick to it. For me, what we call a unit is just an agreement.

Good advice from Ham Vocke on Martin Fowler's website is not to reflect the internal code structure within unit tests, and to test only behavior that is observable, so think:

if I enter values x and y, will the result be z?

instead of

if I enter x and y, will the method call class A first, then call class B and then return the result of class A plus the result of class B?

so we can easily refactor and make changes within our codebases.

We tend to keep test coverage as high as needed to feel confident that our codebase is not highly prone to bugs. Here I recommend Kent Beck's opinion about how thorough our tests should be.

source https://auth0.com/blog/testing-react-applications-with-jest/

Integration tests

This is the type of test that causes the most confusion for me. Let's say we have a React app - what should integration tests cover? What about a monolithic server-side app? What about a microservices architecture? Is writing automated tests against our app's UI integration testing? Some definitions say that integration tests are for processes and components, but what exactly is a process or a component? They have different meanings depending on the app we're currently testing, and there is no clear answer. We can do integration tests of our whole environment (client, server side, database, etc.), but that tends to be called end-to-end testing.

For me, testing a few main parts of the whole system together, like the database and the server, was integration testing - that's how I thought of it for a long time, based on what I learned during university studies and what you'll mostly find on the Internet. However, in the microservices world we often write tests of the integration between our several server-side services, and hence some people call integration tests component or service tests. I feel there is no single right term, since there are a lot of variations, mostly related to the architecture of a system. One approach is to test many parts of our system at once; another is to test just two parts at a time. One part can come from an external service, but if our system uses it and needs it to work properly, then it's an inseparable part of our system and needs to be tested just like the parts we created ourselves.

We can still treat integration tests as white-box tests, since we test components of our box, like the server and the database, which are part of our whole box - so we know something about the structure of the box. Still, some people call them black-box tests, and that may also be fine if we treat, for example, the backend and database as something whose implementation details we don't know.

Let's postpone further concerns about them until we think through the test pyramid.

End to end tests (E2E)

Sometimes also called UI tests, though the terms don't always mean the same thing. In the case of web apps, end-to-end tests are performed on a working instance of our application, which also involves the server-side part. They are usually executed in a browser and perform operations that simulate popular user paths within our app.

Unlike unit and integration tests, they don't give us exact feedback. They just inform us which parts of the UI fail overall, without referring back to the frontend or the backend code. Each failing part needs to be manually checked for errors before reporting a bug. These tests are hard to debug and slow, so we often don't cover the whole app with them. They are also a lot more expensive, because writing an automated test takes 10 to 50 times longer than writing a scenario for a manual test.

But what fundamentally distinguishes them from integration tests? We still test the integration between parts of our system - modules of our code, the UI with the frontend logic, the frontend's integration with the backend. Is it a different category, or just a more general type of integration test? I'll leave you with those doubts ;)

The test pyramid

The test pyramid was first mentioned in Mike Cohn's book Succeeding with Agile. He recommends that applications be covered by a lot of unit tests, which form the base of our tests. Then we write integration (service) tests, and the peak is made of E2E (UI) tests. These are all functional, automated tests.

image source: https://martinfowler.com/bliki/TestPyramid.html

I also like to look at the test pyramid upside down, as a bug filter where bugs not detected at one stage are going to be detected at the next stage.

the testing pyramid as a filter, image source: https://twitter.com/noahsussman/status/836612175707930625

The test pyramid was the answer to the problem of having a lot of E2E/UI tests, for example recorded with special software. They cause a definite increase in build time, are really brittle (prone to break on every change in the UI), and give deficient feedback about encountered errors. So you should write many more unit tests, which are faster, and then catch the rest with a thin layer of E2E tests covering as many main features as possible. The anti-pattern where a codebase is barely covered with unit tests but overflowing with manual and automated GUI tests makes the shape of an ice cream cone. Unfortunately, it's a common situation in many projects. The effects are tragic: after a few years of not writing tests, adding 50 lines of code becomes as time-consuming as adding 5,000 or 50,000 lines used to be, which is painful for developers and not understood by the business. The release process takes longer due to slow GUI testing and still lets bugs slip through, causing a lot of frustration for end users, who will eventually abandon your product. Finally, development of the app will inevitably be suspended and all resources redirected to bug fixing and refactoring. It's your responsibility to write enough unit tests early on in the project. And in cases of pressure from upper management, it's your duty to fight for paying off technical debt as early as you can negotiate, to prevent such situations.

the ice-cream cone anti-pattern, image source: https://james-willett.com/2016/09/the-evolution-of-the-testing-pyramid/

As you can see in the diagram of the ice cream cone anti-pattern, manual tests are an addition to the original test pyramid. Currently, it's a common pattern to extend the original, fully automated test pyramid by adding some manual tests. I believe manual tests are necessary to verify newly introduced bug fixes and some cases that can't be covered by automated tests.

image source:
https://james-willett.com/2016/09/the-evolution-of-the-testing-pyramid/

Since the service-test layer in object-oriented server-side apps is often split into three layers (API, Integration and Component), how should we treat modern frontend apps, and is there a right pyramid for them?

Frontend testing


two-level test pyramid may be a solution for frontend apps

The pyramid with only two layers, unit and E2E tests (plus isolated integration tests), seems pretty straightforward and may still be accurate for some types of apps that have no services to integrate. It may happen that some integration tests are needed, but there will be fewer of them than E2E tests. In this approach, while testing a frontend app, we always treat logic, stores and UI components as separate units. However, there is another “pyramid” worth mentioning, presented by Kent C. Dodds: the testing trophy:

The Testing Trophy for frontend apps, source: https://twitter.com/kentcdodds/status/960723172591992832/photo/1

By static tests he means code linters, formatters and also type checkers (in JavaScript he mentioned Flow, but you can use TypeScript instead). They catch typos and type errors as you write and modify code. In this approach he proposes writing more integration tests between components and logic, because that boundary is often brittle: a button in a component may work, while the action it triggers, which mutates some data, may not. He says integration tests strike a great balance in the trade-offs between confidence, speed and expense. Is it a good approach? Does it look like the universal solution for today’s frontend apps? I believe this style of testing may work for some apps, but not for others. Following these patterns without first understanding your codebase and its common issues, and only then deciding on a testing approach, may lead you the wrong way.

Microservices testing

More modern software development organisations have found ways of scaling their development efforts by spreading the development of a system across different teams. Individual teams build individual, loosely coupled services without stepping on each other’s toes and integrate these services into a big, cohesive system. The more recent buzz around microservices focuses on exactly that.

The Practical Test Pyramid, Ham Vocke, https://martinfowler.com/articles/practical-test-pyramid.html

Another pyramid which I encountered during my research was related to microservices architecture. André Schaffer in his article on Spotify Labs blog presented the microservices testing honeycomb:

The Microservices Testing Honeycomb, source https://labs.spotify.com/2018/01/11/testing-of-microservices/

In this approach we focus on integration tests. We want to be sure that our services work well together; their implementation details are not that important and should be easy to change and refactor without causing bugs in other services. He mentions that the trade-off is slower test execution, but he believes the time is paid back by faster coding and easier maintenance. I’d say that in this approach we treat whole services as units. The type of tests where we test the APIs between services is called contract tests.
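To make the idea concrete, here is a minimal, framework-free sketch of what a consumer-side contract check could look like. The `userContract` shape, the `matchesContract` helper and the hard-coded response are all made up for illustration; in a real setup the response would come from the provider or from a stub verified against it.

```javascript
// The consumer pins down the shape it expects from the provider's API.
const userContract = {
  id: 'number',
  name: 'string',
  email: 'string',
};

// Checks that every field named in the contract exists with the right type.
function matchesContract(payload, contract) {
  return Object.entries(contract).every(
    ([field, type]) => typeof payload[field] === type
  );
}

// Hypothetical provider response, hard-coded here for the sketch.
const response = { id: 1, name: 'Eva', email: 'eva@example.com' };
console.log(matchesContract(response, userContract)); // true
```

Real contract-testing tools go much further (they verify both sides against a shared contract file), but the core assertion is the same: the API between services, not their internals, is what gets tested.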

Summary

As we went through several variants of the test pyramid, I’m almost sure you’ve noticed the correlation between the tests and the app we want to test. And that is basically the essence: we shouldn’t follow any pattern before thinking about and understanding what we want to test. Writing tests, however, is crucial, and the sooner you understand that, the more pleasant the development of your growing app will be.


Sources:

[DRAFT] How to test extended classes in JavaScript

  • https://blog.arkency.com/2014/09/unit-tests-vs-class-tests/
  • https://martinfowler.com/bliki/UnitTest.html
  • testing only the “public” methods of classes/objects saves refactoring time, because existing tests don’t have to change, while still guaranteeing that the main API logic works well: “Existing tests not only helped him to assert correctness of the new solution but more importantly they did not stand in his way during the refactoring. As he admitted, he wouldn’t get it done as quickly as he did had he had to address failing unit tests throughout the process.”
  • test “class” units, because pragmatically it makes our tests much more stable; for example, you don’t need to change a test if you extracted something to a method or split one coupled class into two. Do you really want to test against the structure of your code, or against what the code does? I choose the latter.
  • About the “Integration Test” and “Unit Test” from Martin Fowler:
    “Object-oriented design tends to treat a class as the unit, procedural or functional approaches might consider a single function as a unit. But really it’s a situational thing — the team decides what makes sense to be a unit for the purposes of their understanding of the system and its testing. Although I start with the notion of the unit being a class, I often take a bunch of closely related classes and treat them as a single unit. Rarely I might take a subset of methods in a class as a unit. However you define it doesn’t really matter.” https://martinfowler.com/bliki/UnitTest.html
  • Tests are meant to protect you from regressions, and you don’t get that value when you need to update the tests whenever the implementation changes. For the same reason it defeats the point of the Red-Green-Refactor approach.
  • Now, would I see value in having a test for the Pricing class directly? Having more tests is good, right? Well, no: tests are code. The more code you have, the more you need to maintain, which means a bigger cost. It also builds a more complex mental model. Low-level tests often cause more trouble than they bring benefit.

[DRAFT] When a JavaScript Developer steps into the object-oriented world | Classical vs Prototypal Inheritance

Benefits of prototypal inheritance over classical?, Aadit M Shah, https://stackoverflow.com/a/16872315

  • delegation inheritance via proto
  • concatenation inheritance through copying object properties

Why Prototypal Inheritance Matters, Aadit M Shah, http://aaditmshah.github.io/why-prototypal-inheritance-matters/

Prototypal Inheritance, Douglas Crockford, http://crockford.com/javascript/prototypal.html

Differential inheritance, Wikipedia, https://en.wikipedia.org/wiki/Differential_inheritance

JavaScript The good parts, Douglas Crockford, https://www.amazon.com/dp/0596517742/?tag=stackoverflow17-20

JavaScript For Beginners: the ‘new’ operator, Brandon Morelli,
https://codeburst.io/javascript-for-beginners-the-new-operator-cee35beb669e

JavaScript: The Keyword ‘This’ for Beginners https://codeburst.io/javascript-the-keyword-this-for-beginners-fb5238d99f85

old vs new JS inheritance class: https://medium.com/beginners-guide-to-mobile-web-development/super-and-extends-in-javascript-es6-understanding-the-tough-parts-6120372d3420

JS class inheritance: https://javascript.info/class-inheritance

In this article, https://stackoverflow.com/a/18557503:

  • why encapsulation is good, presented on an example of an online shop (discounts, changed to max 80%, for some resellers 90%)
    • this is also a bad design, a boilerplate antipattern
  • property vs private member/field
    • in C#, transforming fields into properties is a breaking change, so public fields should be coded as Auto-Implemented Properties if your code might be used in a separately compiled client
  • in JavaScript, the standard properties (a data member with getter and setter, described above) are defined by an accessor descriptor (in the link from the question). Alternatively, you can use a data descriptor exclusively (so you can’t use e.g. `value` and `set` on the same property)

new keyword vs Object.create: https://stackoverflow.com/questions/4166616/understanding-the-difference-between-object-create-and-new-somefunction

How to not use prototype, _generalization is always better than specialization_: https://stackoverflow.com/a/21807662

Classical inheritance in object-oriented systems like C# and Java

I believe you have already heard about classes and you know some basics, like that new objects are created from classes during instantiation. But classical inheritance is much more complicated and has concepts that don’t exist in JavaScript:

  • classes
  • objects
  • interfaces
  • abstract classes
  • final classes
  • virtual base classes
  • constructors
  • destructors

They were defined to achieve concepts like:

  • encapsulation
  • inheritance
  • polymorphism

https://stackoverflow.com/questions/816071/prototype-based-vs-class-based-inheritance
Object oriented:

  • encapsulation
  • inheritance
  • polymorphism
  • in a “class-based” language, that copying happens at compile time
  • in a prototypal language, there is no copying; we have a reference to the prototype and its methods
  • instantiation vs copying a reference to the prototype
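The last two points can be demonstrated directly. Because a prototypal instance only holds a reference to its prototype rather than a copy of it, a change to the prototype is visible through instances that already exist (a minimal sketch; the `Animal` example is made up):

```javascript
// Prototype methods are looked up by reference, not copied,
// so replacing one is visible to existing instances.
function Animal(name) {
  this.name = name;
}
Animal.prototype.speak = function () {
  return this.name + ' makes a sound';
};

const dog = new Animal('Rex');
console.log(dog.speak()); // 'Rex makes a sound'

// Replace the method on the prototype AFTER the instance was created:
Animal.prototype.speak = function () {
  return this.name + ' barks';
};
console.log(dog.speak()); // 'Rex barks', because the instance delegates to the prototype
```

In a class-based language like Java, each compiled class fixes its method table up front; there is no equivalent way to swap behaviour under live instances like this.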

Prototypal inheritance

In a prototypal system, objects inherit from other objects through a mechanism that looks up methods via `__proto__`. The `__proto__` property is set on an object when it is instantiated with the `new` keyword. When you define a new function or class, it gets a `prototype` property instead.

For example, let’s say we have a Woman class which inherits from a Human class in JavaScript. When you instantiate a Woman object, its `__proto__` points to `Woman.prototype`, whose own `__proto__` points to `Human.prototype`. All class members defined in Human but not in Woman are accessible on the instance of the Woman class, because when you invoke a method, the object first looks among its own members; if the method is not there, it checks `__proto__`, then the `__proto__` of that `__proto__`, and so on up the chain.
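The lookup described above can be verified with a small sketch (the `Human`/`Woman` names follow the example; `sayName` is an assumed method for illustration):

```javascript
class Human {
  constructor(name) { this.name = name; }
  sayName() { return this.name; }
}
class Woman extends Human {}

const eva = new Woman('Eva');

// sayName is not an own property of the instance...
console.log(Object.prototype.hasOwnProperty.call(eva, 'sayName')); // false
// ...yet it is reachable, found one step up the chain:
console.log(eva.sayName()); // 'Eva'

// The chain itself: instance -> Woman.prototype -> Human.prototype
console.log(Object.getPrototypeOf(eva) === Woman.prototype);             // true
console.log(Object.getPrototypeOf(Woman.prototype) === Human.prototype); // true
```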

Diamond problem

https://en.wikipedia.org/wiki/Multiple_inheritance#The_diamond_problem

“Instantiation” in JavaScript

The first way to instantiate an object is from a constructor function. It’s a function like every other function, but we can define its `prototype` property, which will be used as the `__proto__` property of objects created from it with the `new` keyword.

Ok, let’s slow down. First, look at how we declare a constructor function:

<!--kg-card-begin: code--><pre>`function Person() {}`</pre><!--kg-card-end: code-->

Yes, you’re right. It’s a function like every other function. A good practice is to start a constructor function’s name with an upper-case letter to distinguish it from normal functions, but it’s not required.

What is the prototype of this function? By default it’s an object which inherits from `Object`, so its `__proto__` property is set to `Object.prototype`. It also has an additional property, `constructor`:
<!--kg-card-begin: code--><pre>`Person.prototype
// { constructor: f, __proto__: Object }`</pre><!--kg-card-end: code-->

But what is `constructor`? It’s a property which points back to our function, which in our case is `Person`:
<!--kg-card-begin: code--><pre>`{
    constructor: Person,
    __proto__: Object
}`</pre><!--kg-card-end: code-->

I believe you already understand the purpose of `__proto__`. But what’s the purpose of `constructor`? It simply tells us which function was used to create the object.

So let’s create a simple constructor function:
<!--kg-card-begin: code--><pre>`function Person(name) {
  this.name = name;
  console.log(this);           // the newly created object
  console.log(this.prototype); // undefined: instances have no `prototype`
  console.log(this.__proto__); // Person.prototype
}
Person.prototype.sayHello = function () { console.log(this.name); };
const matt = new Person('Matt');
matt.sayHello();
// 'Matt'`</pre><!--kg-card-end: code-->

`prototype` is an object with a `constructor` property that references the constructor function (`Person`). Every function has a prototype with this property, which points back to that function:
<!--kg-card-begin: code--><pre>`function getHello () { return 'Hello World' };
getHello.prototype.constructor === getHello;
// true`</pre><!--kg-card-end: code-->

`prototype` is not available on the instances themselves (or other objects), but only on the constructor functions.
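A quick sketch of this point, reusing the `Person` example from above:

```javascript
function Person(name) {
  this.name = name;
}

const matt = new Person('Matt');

// The constructor function has a `prototype` property...
console.log(typeof Person.prototype); // 'object'
// ...but the instance does not; it has an internal [[Prototype]] link instead:
console.log(matt.prototype);                                   // undefined
console.log(Object.getPrototypeOf(matt) === Person.prototype); // true
console.log(matt.__proto__ === Person.prototype);              // true
```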

The `constructor` property returns a reference to the constructor function that created the instance object.
<!--kg-card-begin: code--><pre>`function Person(name) {
  this.name = name;
}
var person = new Person('Matt');
person.constructor
// function Person...`</pre><!--kg-card-end: code-->

`Object.create` creates a new object which has its prototype set to the passed-in object, i.e.:
<!--kg-card-begin: code--><pre>`const obj = {};
const obj2 = Object.create(obj);
obj2.__proto__ === obj;
// true`</pre><!--kg-card-end: code-->

The `Object.create` function untangles JavaScript’s constructor pattern, achieving true prototypal inheritance.

So instead of creating classes, you make prototype objects, and then use the `Object.create` function to make new instances.
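A minimal sketch of that pattern (the `personProto` object and its `sayHello` method are made up for illustration):

```javascript
// Prototypal pattern: no constructor function, no class.
// We make a plain prototype object...
const personProto = {
  sayHello() {
    return 'Hello, ' + this.name;
  },
};

// ...and use Object.create to make instances that delegate to it.
const matt = Object.create(personProto);
matt.name = 'Matt';

console.log(matt.sayHello());                             // 'Hello, Matt'
console.log(Object.getPrototypeOf(matt) === personProto); // true
```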
<!--kg-card-begin: code--><pre>`class Person {
 constructor(name) {
   this.name = name;
   console.log(this);
   console.log(this.prototype);
 }
 sayHello() {
   console.log(this.name);
 }
}`</pre><!--kg-card-end: code-->

---
<!--kg-card-begin: code--><pre>`function Person(name) {
  this.name = name;
  console.log(this);
  console.log(this.prototype);
}
Person.prototype.sayHello = function() {
  console.log(this.name);
}
Person('Matt'); // `this` points to window, so 'Matt' is assigned to window.name
new Person('Matt'); // `this` points to the newly created Person object
Object.create(Person); // creates a new object whose __proto__ is the Person function itself (the result is not callable)
Object.create(Person.prototype); // creates a new object whose __proto__ is Person.prototype, the same link `new` sets up`</pre><!--kg-card-end: code-->
---

<!--kg-card-begin: code--><pre>`function Person(name) {
  this.name = name;
  console.log(this);
  console.log(this.prototype);
  this.sayHello = function() {
    console.log(this.name);
  }
}`</pre><!--kg-card-end: code-->

- `new` therefore creates an object and binds it as `this` inside the function
- without `new`, no prototype link is created and `this` points to window

When `Person` is declared as an ES6 class instead:
<!--kg-card-begin: code--><pre>`Person('matt')
&gt;&gt; Uncaught TypeError: Class constructor Person cannot be invoked without 'new'
typeof Person
&gt;&gt; "function"
const matt = Object.create(Person("Matt"))
&gt;&gt; Uncaught TypeError: Class constructor Person cannot be invoked without 'new'`</pre><!--kg-card-end: code-->

* a class has the type “function”
* classes cannot be invoked without `new`
* a class returns a new object
* you can create a copy of the object returned by a class using Object.create; such a copied object has no prototype/`this` if the class didn’t have one

encapsulation
polymorphism

## ES6 class vs old function constructor approach

Let's say we have a `Human` class:
<!--kg-card-begin: code--><pre>`class Human {
  constructor (name) {
    this.name = name; 
  }
}

const human = new Human('Homo sapiens');`</pre><!--kg-card-end: code-->

As function this would be:
<!--kg-card-begin: code--><pre>`function HumanFn(name) {
  this.name = name;
}
const humanFn = new HumanFn('Homo sapiens');`</pre><!--kg-card-end: code-->

As you can see, in the function approach we can instantiate an object from a function. Basically, what the `new` operator does in JS is: it creates a new object, sets the object’s prototype to the function’s `prototype` (here `HumanFn.prototype`), calls the function with `this` bound to the newly created object, and returns that object, unless the constructor itself returns an object, in which case that returned object is used instead.
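These steps can be sketched as a rough re-implementation of `new` (the `myNew` helper is hypothetical, written only to illustrate the steps; the real operator does a bit more, e.g. handling `new.target`):

```javascript
function myNew(Ctor, ...args) {
  // 1. create a new object whose prototype is Ctor.prototype
  const obj = Object.create(Ctor.prototype);
  // 2. call the constructor with the new object bound as `this`
  const result = Ctor.apply(obj, args);
  // 3. if the constructor returned an object, use it; otherwise use the new object
  return (result !== null && typeof result === 'object') ? result : obj;
}

function HumanFn(name) {
  this.name = name;
}

const a = new HumanFn('Homo sapiens');
const b = myNew(HumanFn, 'Homo sapiens');
console.log(a.name === b.name);    // true
console.log(b instanceof HumanFn); // true
```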

Let's compare them once more in deep:
<!--kg-card-begin: code--><pre>`class Person {
    constructor(name) {
        this.name = name;
    }
}

const matt = new Person('Matt'); // works
const john = new matt.constructor('John'); // works the same as above
const tom = matt.constructor('Tom'); // error `Class constructor Person cannot be invoked without 'new'``</pre><!--kg-card-end: code-->

In function approach it is slightly different:
<!--kg-card-begin: code--><pre>`function Person(name) {
    this.name = name;
}

const matt = new Person('Matt'); // works
const john = new matt.constructor('John'); // works as above
const tom = Person('Tom'); // tom is undefined
const tom2 = matt.constructor('Tom'); // tom2 is undefined`</pre><!--kg-card-end: code-->

The difference is in the line related to `tom`: `Person` is called as a plain function, so the context of the invoked function is set to `window` (in sloppy mode) instead of a new object created by the `new` keyword, as with the previously created `matt` or `john`.
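We can make this effect visible explicitly. The sketch below binds the global object by hand with `call`, which is exactly what a bare `Person('Tom')` does implicitly in sloppy mode (in strict mode `this` would instead be `undefined` and the assignment would throw):

```javascript
function Person(name) {
  this.name = name;
}

// Simulate the sloppy-mode call: `this` is the global object
// (window in a browser, globalThis in Node).
const tom = Person.call(globalThis, 'Tom');

console.log(tom);             // undefined, the function returns nothing
console.log(globalThis.name); // 'Tom', the property leaked onto the global object
```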

## Inheritance comparison

We have ES6 class extended:
<!--kg-card-begin: code--><pre>`class Human {
  constructor (name) {
    this.name = name;
  }
  sayName () { console.log(this.name) };
}

class Woman extends Human {
  gender = 'female'
}

const eva = new Woman('Eva');`</pre><!--kg-card-end: code-->

We do not need a constructor above, because if you do not specify a `constructor` method, a default empty constructor is used. For derived classes, the default constructor is:
<!--kg-card-begin: code--><pre>`constructor(...args) {
  super(...args);
}`</pre><!--kg-card-end: code-->

As function this would be:
<!--kg-card-begin: code--><pre>`function HumanFn(name) {
    this.name = name;
}
HumanFn.prototype.sayName = function() { console.log(this.name) }

function WomanFn(name) {
      HumanFn.call(this, name);
      this.gender = 'female';
}
WomanFn.prototype = Object.create(HumanFn.prototype);

const evaFn = new WomanFn('Eva');`</pre><!--kg-card-end: code-->

Instead of using `extends`, you need to set `WomanFn` prototype manually.
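One detail worth noting: after replacing the prototype with `Object.create`, the inherited `constructor` property points at `HumanFn`, because the fresh prototype object delegates to `HumanFn.prototype`. It is common to restore it manually (a sketch based on the example above):

```javascript
function HumanFn(name) {
  this.name = name;
}
HumanFn.prototype.sayName = function () { return this.name; };

function WomanFn(name) {
  HumanFn.call(this, name);
  this.gender = 'female';
}
WomanFn.prototype = Object.create(HumanFn.prototype);
// Without this line, evaFn.constructor would resolve to HumanFn:
WomanFn.prototype.constructor = WomanFn;

const evaFn = new WomanFn('Eva');
console.log(evaFn.constructor === WomanFn); // true
console.log(evaFn.sayName());               // 'Eva'
```

`class ... extends` does this bookkeeping for you, which is one reason the ES6 syntax is less error-prone.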

## Interesting info about JS classes
  • you can’t call a class like a plain function (without `new`); it throws `Class constructor Human cannot be invoked without 'new'`
  • static methods can be called on the class itself, with no need to instantiate
  • static methods are not available on instances
  • static methods have no access to an instance’s `this`, but we can invoke them on an instance, for example with `Human.sayHello.call(instantiatedHumanObject)`
  • as stated, if you do not specify a constructor method, a default empty constructor is used; for derived classes, the default constructor is:
    `constructor(...args) { super(...args); }`
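A small sketch of the static-method points above, reusing the `Human`/`sayHello` names from the notes:

```javascript
class Human {
  constructor(name) {
    this.name = name;
  }
  static sayHello() {
    // In a static method, `this` is the class itself.
    return 'Hello, ' + this.name;
  }
}

const matt = new Human('Matt');

console.log(Human.sayHello());          // 'Hello, Human' (classes have a .name: the class name)
console.log(typeof matt.sayHello);      // 'undefined', statics are not on instances
console.log(Human.sayHello.call(matt)); // 'Hello, Matt', borrowed for the instance via call
```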

Object methods

  • Object.defineProperty(objectOrContext, propertyName, propertyDescriptors)
    • descriptors: configurable, enumerable, writable, set, get, value
    • accessors: get, set
  • Object.setPrototypeOf(obj2, obj1)
  • Object.getPrototypeOf(object)
  • Object.getOwnPropertyNames(object)
  • Object.getOwnPropertyDescriptor(object, property)
  • Object.defineProperties(object, descriptors)
  • prevents adding properties to an object: Object.preventExtensions(object), Object.isExtensible(object)
  • prevents changes of object properties (not values): Object.seal(object), Object.isSealed(object)
  • prevents any changes to an object: Object.freeze(object), Object.isFrozen(object)
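A short sketch exercising a few of these methods (the `id`/`name` property names are made up for illustration):

```javascript
const obj = {};

// Data descriptor: a fixed, hidden value.
Object.defineProperty(obj, 'id', {
  value: 42,
  writable: false,
  enumerable: false,
  configurable: false,
});

// Accessor descriptor: get/set instead of value/writable.
let _name = 'Matt';
Object.defineProperty(obj, 'name', {
  get() { return _name; },
  set(v) { _name = v.trim(); },
  enumerable: true,
  configurable: true,
});

obj.name = '  Eva  ';
console.log(obj.name);         // 'Eva', trimmed by the setter
console.log(Object.keys(obj)); // ['name'], 'id' is not enumerable

Object.freeze(obj);
console.log(Object.isFrozen(obj)); // true
```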