<?xml version="1.0" encoding="utf-8"?>
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/">
    <channel>
        <title>dnlytras.com RSS</title>
        <link>https://dnlytras.com</link>
        <description>Software developer. Building web applications with React, TypeScript &amp; Elixir.</description>
        <lastBuildDate>Wed, 08 Apr 2026 20:22:30 GMT</lastBuildDate>
        <docs>https://validator.w3.org/feed/docs/rss2.html</docs>
        <generator>Patience is a virtue</generator>
        <image>
            <title>dnlytras.com RSS</title>
            <url>https://dnlytras.com/avatar.png</url>
            <link>https://dnlytras.com</link>
        </image>
        <copyright>All rights reserved 2026, Dimitrios Lytras</copyright>
        <item>
            <title><![CDATA[My modest use of AI for programming]]></title>
            <link>https://dnlytras.com/blog/programming-with-ai</link>
            <guid isPermaLink="false">https://dnlytras.com/blog/programming-with-ai</guid>
            <pubDate>Tue, 30 Sep 2025 00:00:00 GMT</pubDate>
            <description><![CDATA[How I currently use AI in my workflow]]></description>
            <content:encoded><![CDATA[<p>I don&#x27;t use AI suggestions anymore; I find them distracting.</p>
<p>Instead, I prefer to use an agent in the terminal to rough-sketch a feature, and fill in the gaps myself in a distraction-free environment. I might even ask it to validate an approach (&quot;be critical, not a sycophant&quot;) without requesting any code changes. For people like me who work in isolation and build features end to end, an assistant that operates this way is very helpful.</p>
<p>Of course, I&#x27;ll delegate some chore work, but in principle I want to write my own code.</p>
<p>I find that this works best for me, because it avoids the three deadly AI sins:</p>
<ol>
<li><strong>Making me lazy</strong>. I take pride in my work and don&#x27;t want to push sloppy code.</li>
<li><strong>Causing my skills to atrophy</strong>. I don&#x27;t want my brain to switch off. The model might have good ideas, but most of the time I have better ones. These come from real-world experience, and that&#x27;s what differentiates me from some other AI operator.</li>
<li><strong>Making me lose the feeling of ownership</strong>. This is where teams fail. No one wants to take ownership, stuff falls through cracks, and bad things happen. In contrast, when I write the code, I know exactly why I did it this way, and it&#x27;s my responsibility to maintain it.</li>
</ol>
<p>So this is where I draw the line with AI in my workflow. Things might change, but I get a significant productivity boost without noticeable drawbacks.</p>]]></content:encoded>
            <author>dnlytras@gmail.com (Dimitrios Lytras)</author>
        </item>
        <item>
            <title><![CDATA[Rails, Webpacker & React: A migration odyssey]]></title>
            <link>https://dnlytras.com/blog/react-webpacker-migration-odyssey</link>
            <guid isPermaLink="false">https://dnlytras.com/blog/react-webpacker-migration-odyssey</guid>
            <pubDate>Fri, 25 Apr 2025 00:00:00 GMT</pubDate>
            <description><![CDATA[Looking back at a painful refactor journey from Webpacker to Vite, Enzyme to React Testing Library, and React]]></description>
            <content:encoded><![CDATA[<p>When joining a new team, you often inherit a codebase that has been around for a while. It&#x27;s easy to look elsewhere, ignore the warts, and fix little typos here and there.</p>
<p>I find that the best kind of onboarding (if the team allows it) is to look at the long-standing issues and try to fix them. It helps you understand the codebase much better. That&#x27;s what I did.</p>
<h2>The Rails/Webpacker/React crossroad</h2>
<p>Last year, I was facing the following issues in a codebase running on Rails and React:</p>
<ul>
<li><a href="https://github.com/rails/webpacker">Webpacker</a> was used to bundle the React code. But it was discontinued in Rails 7 without a clear and easy upgrade path to anything</li>
<li>With Webpacker in limbo, the Node version was stuck on 16.x, with 22.x being the latest LTS</li>
<li>The tests were written with <a href="https://github.com/enzymejs/enzyme">Enzyme</a>, which was abandoned after 2021</li>
<li>Enzyme was blocking the React upgrade, as it was incompatible with React 18.x</li>
<li>The legacy context API was used</li>
<li><a href="https://github.com/apollographql/apollo-client">Apollo Client</a>, <a href="https://github.com/jestjs/jest">Jest</a>, and <a href="https://github.com/mswjs/msw">MSW</a> were outdated as well</li>
</ul>
<p>Essentially, we had two big blockers.</p>
<p>Node was blocked by Webpacker, and React was blocked by Enzyme. Everything else was blocked due to the first two.</p>
<h2>Looking for a Webpacker replacement</h2>
<p>While the JavaScript ecosystem is often messy, nothing prepared me for this letdown from the Ruby world. I was also slightly annoyed that DHH celebrated moving away from Webpack as a win, while teams were left holding the hot potato.</p>
<p>Anyway, the last Webpacker version was <code>v5</code>. There was no <code>v6</code> release except for a <a href="https://github.com/rails/webpacker/releases/tag/v6.0.0.rc.6">release candidate</a> (<code>v6.0.0-rc.6</code>).</p>
<p>If you wanted to keep your webpack config, the two options were <a href="https://github.com/shakacode/shakapacker"><code>shakapacker</code></a> &amp; <a href="https://github.com/rails/jsbundling-rails"><code>jsbundling-rails</code></a>.</p>
<p>I initially looked at shakapacker, and the steps I had to take were:</p>
<ul>
<li>Upgrade from webpacker <code>v5</code> to <code>v6.0.0-rc.6</code> (handle whatever breaking changes there were)</li>
<li>Migrate from webpacker <code>v6.0.0-rc.6</code> to shakapacker <code>v6</code> (can&#x27;t upgrade from webpacker <code>v5</code> directly)</li>
<li>Upgrade to shakapacker <code>v6.5.2</code> (there were a few changes needed)</li>
<li>Upgrade to shakapacker <code>v7</code> (a lot of breaking changes)</li>
</ul>
<p>That said, I didn&#x27;t go down that rabbit hole. Upgrading to <code>v6.0.0-rc.6</code> already broke, as we were on a newer Ruby version (<code>v3.3.0</code>):</p>
<pre><code class="language-txt">bundle exec rails webpacker:install

       apply  /Users/dnlytras/.asdf/installs/ruby/3.3.0/lib/ruby/gems/3.3.0/gems/webpacker-6.0.0.rc.6/lib/install/template.rb
   identical    config/webpacker.yml
   identical    package.json
  Copying webpack core config
       exist    config/webpack
   identical    config/webpack/base.js
   identical    config/webpack/development.js
   identical    config/webpack/production.js
   identical    config/webpack/test.js
bin/rails aborted!
NoMethodError: undefined method `exists?&#x27; for class Dir (NoMethodError)

if Dir.exists?(Webpacker.config.source_path)
      ^^^^^^^^
Did you mean?  exist?
</code></pre>
<p>What a mess. I decided then that <strong>even if I kept the webpack config, I still wouldn&#x27;t want to maintain that pipeline</strong>. As much as I was annoyed by the rug pull, I never liked Webpack.</p>
<p>So, I opted to move away from Webpack to Vite. I&#x27;ve used Vite before, and I was very happy with it. The only open question was whether there was a good integration with Rails.</p>
<blockquote>
<p>I&#x27;m not a Ruby developer; this was my first Rails project. I had to learn a lot of things on the go. The team had also created an issue to migrate off Webpacker, so I wasn&#x27;t some weird newcomer suggesting we change the whole stack. I just took the opportunity to do it.</p>
</blockquote>
<h2>Moving from Webpacker to Vite &amp; Vite-Ruby</h2>
<p>To my surprise, I found <a href="https://vite-ruby.netlify.app/guide/introduction.html"><code>vite-ruby</code></a>, which has great integration with Rails and gives a few pointers on <a href="https://vite-ruby.netlify.app/guide/migration.html#webpacker-%F0%9F%93%A6">how to migrate from Webpacker</a>.</p>
<p>Eventually, I made it work. I don&#x27;t want to expand much on the details, but it was mostly:</p>
<ul>
<li>Writing a new Vite config</li>
<li>Moving from Webpacker packs to Vite entrypoints</li>
<li>Trimming down unused dependencies</li>
<li>Finding <a href="https://github.com/ElMassimo/stimulus-vite-helpers">workarounds</a> for some stimulus controllers to work with Vite</li>
<li>Upgrading Node versions in CI and locally, unlocking <code>fetch</code></li>
<li>Enabling fast refresh (no more full page reloads!)</li>
<li>Dropping most of the Babel plugins, and only keeping what was needed (mostly for Jest)</li>
<li>Enabling code splitting and lazy loading, which reduced the bundle size by 60%. (This could also have been done with Webpack, but it wasn&#x27;t set up)</li>
</ul>
<p>I hit pause for a few days to let the dust settle and monitor production. By that time, I had pretty much tested every flow in the app - what an onboarding experience! Nothing of note happened, so I&#x27;d label this a great success.</p>
<p>The only thing I missed was enabling <code>emptyOutDir</code> in the Vite config. As a consequence, outdated assets would pile up between builds, but that was an easy fix.</p>
<pre><code class="language-js">import react from &#x27;@vitejs/plugin-react&#x27;;
import {defineConfig} from &#x27;vite&#x27;;
import commonjs from &#x27;vite-plugin-commonjs&#x27;;
import RubyPlugin from &#x27;vite-plugin-ruby&#x27;;

export default defineConfig({
  plugins: [
    RubyPlugin(),
    commonjs(),
    react({
      babel: {
        plugins: [&#x27;&#x27;],
      },
    }),
  ],
  build: {
    emptyOutDir: true, // this was missing
    commonjsOptions: {
      transformMixedEsModules: true,
    },
  },
  // optimizeDeps is a top-level Vite option, not part of `build`
  optimizeDeps: {
    include: [&#x27;&#x27;],
  },
});
</code></pre>
<h2>Moving from Enzyme to React Testing Library</h2>
<p>With hot-reload working, it was time to work on the React side of things. My team was using Enzyme for testing, but had already started writing new tests with React Testing Library (RTL).</p>
<p>As you might imagine, the migration process is very frustrating. Enzyme tests implementation details, while React Testing Library tests the user experience by simulating normal user actions.</p>
<p>So let&#x27;s say you&#x27;re testing a chart component. Enzyme would check whether the props are passed correctly and whether the lifecycle methods are called. You can change all props at will and simulate any scenario. With that out of the way, you face the harsh truth that your tests might have blind spots. To test the same component with RTL, you have to manually hover over the lines and bars, check the tooltips, axis calculations, and so on. Essentially, you rewrite the whole test; there&#x27;s zero overlap.</p>
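<p>To make the contrast concrete, here&#x27;s a deliberately framework-free sketch of the two philosophies. No real Enzyme or RTL calls appear here; the tiny <code>Tooltip</code> component and both assertion styles are only paraphrased for illustration:</p>

```javascript
// A tiny "component": props in, rendered output out.
const Tooltip = (props) => ({
  type: 'Tooltip',
  props, // what Enzyme-style tests poke at
  text: `${props.label}: ${props.value}`, // what the user actually sees
});

const rendered = Tooltip({ label: 'Revenue', value: 42 });

// Enzyme-style assertion: inspect implementation details (the props).
const enzymeStylePasses = rendered.props.value === 42;

// RTL-style assertion: inspect the output the user sees (the text).
const rtlStylePasses = rendered.text.includes('Revenue: 42');
```

<p>The first assertion can pass even when the visible output is broken; the second fails the moment the user-facing text does, which is the whole point of the RTL philosophy.</p>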
<p>I started migrating a few tests per day. This unearthed a few bugs in the codebase (mostly due to the shallow rendering of Enzyme), but it gave me confidence that we actually test the right things.</p>
<p>It wasn&#x27;t hard per se, but more of a tedious task that someone had to tackle consistently, bit by bit. I also started experimenting with LLMs at that time, but the models available then couldn&#x27;t provide meaningful assistance for this task. Pain.</p>
<h2>Upgrading React to v18</h2>
<p>With Enzyme gone, the React upgrade was unblocked.</p>
<p>To my dismay though, I found a blocker for React 19: <a href="https://react.dev/blog/2024/04/25/react-19-upgrade-guide#removed-proptypes-and-defaultprops">PropTypes were discontinued</a>. Moving to TypeScript is not an option for everyone, especially when your team writes the business logic in a dynamic language like Ruby. <a href="https://github.com/facebook/react/issues/28992">So I feel like React is dropping the ball here.</a></p>
<p>Anyway, that&#x27;s a problem for the future; let&#x27;s move on to React 18 first. Some of the changes I made were:</p>
<ul>
<li>Dropping the legacy context API</li>
<li>Debugging some issues (hint: there were <code>useEffect</code> hooks returning <code>null</code> instead of <code>undefined</code>)</li>
<li>Rewriting a few critical class components that were using <code>UNSAFE_componentWillMount</code></li>
<li>Removing <code>defaultProps</code></li>
</ul>
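<p>The <code>useEffect</code> bug above is worth spelling out: React 18 expects an effect to return either <code>undefined</code> or a cleanup <em>function</em>, and returning <code>null</code> trips that check. Here&#x27;s a minimal, React-free sketch; the <code>validateEffectReturn</code> helper is hypothetical and only mimics the spirit of React&#x27;s check (React logs an error, while this sketch throws):</p>

```javascript
// Mimics React 18's complaint: an effect may only return undefined
// or a cleanup function. (Hypothetical helper, for illustration.)
function validateEffectReturn(effectFn) {
  const result = effectFn();
  if (result !== undefined && typeof result !== 'function') {
    throw new TypeError('useEffect must return undefined or a cleanup function');
  }
  return result;
}

// The buggy pattern we had: early-returning null from an effect.
const buggyEffect = () => {
  const shouldSubscribe = false;
  if (!shouldSubscribe) return null; // triggers the complaint
};

// The fix: a bare `return;` yields undefined, which React accepts.
const fixedEffect = () => {
  const shouldSubscribe = false;
  if (!shouldSubscribe) return;
};

let rejected = false;
try {
  validateEffectReturn(buggyEffect);
} catch (e) {
  rejected = true; // null was rejected, as React 18 would warn
}
```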
<p>Overall, the process wasn&#x27;t hard; it had only been delayed for a long time. Did we gain anything? I assume automatic batching, but I didn&#x27;t see any noticeable performance improvements.</p>
<h2>Upgrading Jest &amp; React Testing Library</h2>
<p>Lastly, I made a big version bump from <code>v12</code> to <code>v16</code> in React Testing Library. The details are kinda hazy right now, but the biggest changes were dropping the <code>@testing-library/react-hooks</code> package, fixing some race conditions, and adding a few <code>waitFor</code> calls.</p>
<p>As for Jest, I took a jab at removing it in favour of Vitest, but I didn&#x27;t have the energy; I just wanted to move on. Instead, I decided to upgrade Jest from <code>v27</code> to <code>v29</code>. Nothing much changed: I removed the <code>resize-observer-polyfill</code>, fixed a few issues with the snapshots, and had to install <code>jest-environment-jsdom</code> separately for my troubles.</p>
<p>I finished the refactor happy with the Vite migration, but the rest felt like a chore to be done. I didn&#x27;t feel like I was getting anything out of it, and I was just upgrading stuff for the sake of it.</p>
<h2>Other changes happening in the background</h2>
<p>While I was doing the above, my team continued with:</p>
<ul>
<li>Migrating from Yarn v1 to NPM</li>
<li>Upgrading Storybook</li>
<li>Upgrading ESLint (I still don&#x27;t get what flat config offers us)</li>
<li>Refactoring class components to function components</li>
<li><strong>Normal work that actually makes money</strong></li>
</ul>
<p>I was also doing normal work. My favorite feature was building a new charts library (which came with a major D3 version bump as well).</p>
<p>I hadn&#x27;t worked with D3 that intensely before, so I had to find a nice way to tie that with React, and implement features like zooming, comparing charts, toggling outliers, applying themes, locking, maintaining performance for big datasets, and so on.</p>
<p>I also worked on the Rails side of things, mumbling under my breath &quot;OOP isn&#x27;t real, OOP won&#x27;t hurt you&quot;.</p>
<h2>MSW and Apollo Client</h2>
<p>The refactors took a pause, and a good number of months passed. Recently, I went back to the drawing board and picked up two leftovers:</p>
<ul>
<li>Upgrading MSW from <code>v0</code> to <code>v1</code></li>
<li>Upgrading Apollo Client from <code>v2</code> to <code>v3</code></li>
</ul>
<p>The MSW upgrade was very tricky, and the kind of refactor I hate the most: a package that only serves development purposes, has breaking changes, and you don&#x27;t really care to upgrade. I wasn&#x27;t even sure if I should do it, but there were a few flaky tests related to MSW&#x27;s setup that were bothering me. Thankfully, LLMs were a much better help this time around. I still have the <code>v2</code> version to upgrade to, but I&#x27;ll get to that some other time.</p>
<p>After that, <a href="https://www.apollographql.com/docs/react/migrating/apollo-client-3-migration">I focused on the Apollo Client</a>. The biggest change (and I&#x27;m glad for it) was that the data became <a href="https://github.com/apollographql/apollo-client/pull/5153">immutable/frozen</a>. There were some runaway mutations happening in the codebase, and I was delighted to catch them.</p>
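<p>A quick sketch of the class of bug this surfaces. No Apollo import here; a plain <code>Object.freeze</code> stands in for what Apollo Client 3 does to the data it hands back:</p>

```javascript
'use strict';

// Stand-in for a cached query result; Object.freeze mimics how
// Apollo Client 3 freezes the data it returns.
const result = Object.freeze({
  user: Object.freeze({ name: 'Ada', role: 'admin' }),
});

// A "runaway mutation" like this went unnoticed with Apollo Client 2;
// on frozen data it throws in strict mode (or is silently ignored otherwise).
try {
  result.user.name = 'Grace';
} catch (e) {
  // TypeError: cannot assign to read only property 'name'
}

// Either way, the cached object stays untouched,
// and the fix is to copy instead of mutating:
const renamedUser = { ...result.user, name: 'Grace' };
```

<p>That&#x27;s exactly why the frozen results are a blessing: the mutation fails loudly (or at least fails) instead of silently corrupting the cache.</p>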
<h2>Future improvements</h2>
<p>I&#x27;m happy with the state of the codebase now. There are no dependencies that can block us from building features, and we can move forward with confidence. The only real thorn is how to upgrade to React 19.</p>
<ul>
<li>Do we move to TypeScript? I don&#x27;t think so. I want to minimize the front-end logic, and simplify things.</li>
<li>Do we drop prop-types and lose the implicit documentation that comes with it? It feels like going backwards. I don&#x27;t want to have outdated JSDocs instead.</li>
<li>Is the new RSC direction of React something that interests us? No, we use Rails. I&#x27;m excited to try it in TanStack Start or React Router 7, but in the context of Rails it&#x27;s absolutely useless.</li>
</ul>
<p>This is something I haven&#x27;t come to terms with yet. Not sure what to do.</p>
<p>Other than that, my only other goal is to re-evaluate Vitest and its <a href="https://vitest.dev/guide/browser/">browser-mode</a> feature. I feel we can make tests faster, drop the leftover babel plugins, and simplify our tests without mocking stuff like <code>getComputedStyle</code>.</p>
<h2>Final thoughts</h2>
<p>As I proofread this, I can&#x27;t help but think that a good chunk of this work could have been avoided if we hadn&#x27;t combined React with Rails. Webpacker really slowed us down, and its unexpected retirement made things even worse.</p>
<p>I can&#x27;t speak about Turbo, Hotwire, or Stimulus, as I haven&#x27;t developed with them. My only experience is using applications built with them, which feel sluggish and slow. I don&#x27;t know if it&#x27;s the framework or the implementation, but I don&#x27;t like it.</p>
<p>If I were to suggest a different approach, I would pick <a href="https://inertiajs.com/">Inertia</a> and drop Apollo completely as well. Of course, Inertia wasn&#x27;t as mature or widely known back then, but if anyone is deciding between an in-house solution and shipping two apps, put Inertia on the table as well. Here are the docs for <a href="https://inertia-rails.dev/guide/">Rails</a> and a great introduction by <a href="https://evilmartians.com/chronicles/inertiajs-in-rails-a-new-era-of-effortless-integration">Evil Martians</a>.</p>
<p>As for the never-ending stream of dependency upgrades on the JavaScript front, I don&#x27;t mind it that much. I usually run <code>npx check-updates -i</code> and handle them every week. It only becomes a problem that compounds over time if not addressed regularly. That said, I still can&#x27;t shake the feeling that we, developers, waste time on unnecessary upgrades like ESLint and Storybook.</p>
<p>I hope this was a fun read. If you find yourself in the same situation regarding React 19 and PropTypes, feel free to reach out. I would love to hear your thoughts on it.</p>
<blockquote>Resources:<ul>
<li><a href="https://world.hey.com/dhh/modern-web-apps-without-javascript-bundling-or-transpiling-a20f2755">Modern web apps without JavaScript bundling or transpiling (DHH)</a></li>
<li><a href="https://github.com/shakacode/shakapacker/blob/main/docs/v6_upgrade.md#webpacker-v600rc6-to-shakapacker">Upgrading from Webpacker v5 to Shakapacker v6</a></li>
<li><a href="https://github.com/rails/jsbundling-rails/blob/main/docs/switch_from_webpacker.md">Switch from Webpacker 5 to jsbundling-rails with webpack</a></li>
<li><a href="https://vite-ruby.netlify.app/guide/introduction.html">Vite Ruby</a></li>
<li><a href="https://dev.to/wojtekmaj/enzyme-is-dead-now-what-ekl">Enzyme is dead, now what</a></li>
<li><a href="https://testing-library.com/docs/react-testing-library/migrate-from-enzyme/">Migrate from Enzyme</a></li>
<li><a href="https://react.dev/blog/2022/03/08/react-18-upgrade-guide">How to Upgrade to React 18</a></li>
<li><a href="https://react.dev/blog/2024/04/25/react-19-upgrade-guide">React 19 Upgrade Guide</a></li>
<li><a href="https://evilmartians.com/chronicles/inertiajs-in-rails-a-new-era-of-effortless-integration">Inertia.js in Rails: a new era of effortless integration</a></li>
</ul></blockquote>]]></content:encoded>
            <author>dnlytras@gmail.com (Dimitrios Lytras)</author>
        </item>
        <item>
            <title><![CDATA[Using Phoenix with React and Inertia]]></title>
            <link>https://dnlytras.com/blog/phoenix-react-inertia</link>
            <guid isPermaLink="false">https://dnlytras.com/blog/phoenix-react-inertia</guid>
            <pubDate>Sun, 02 Feb 2025 00:00:00 GMT</pubDate>
            <description><![CDATA[Getting the best of both worlds; Phoenix's productivity, and React's ecosystem]]></description>
            <content:encoded><![CDATA[<blockquote>Contents:<ul>
<li><a href="#introduction">Introduction</a></li>
<li><a href="#why-not-liveview">Why not LiveView</a></li>
<li><a href="#whats-inertia-exactly">What&#x27;s Inertia exactly?</a></li>
<li><a href="#setting-up-inertia">Setting up Inertia</a></li>
<li><a href="#code-splitting">Code splitting</a></li>
<li><a href="#routing">Routing</a></li>
<li><a href="#pages-and-controllers">Pages and controllers</a></li>
<li><a href="#authentication">Authentication</a></li>
<li><a href="#caveats">Caveats</a></li>
<li><a href="#final-thoughts">Final thoughts</a></li>
</ul></blockquote>
<h2>Introduction</h2>
<p>I love React. I&#x27;ve worked with various people and companies, and I&#x27;ve seen its power in action. I&#x27;ve also seen some of the worst things done with it, but fixing those buys &quot;Little Dutch&quot; toys for my daughter, so I have no complaints there.</p>
<p>Jokes aside, React has a massive ecosystem and, most importantly, known unknowns. It&#x27;s considered boring technology now (at least the SPA parts), and the problem you might have has already been tackled and documented somewhere.</p>
<p>My pain point has always been the back-end of things in JavaScript. Next.js is utterly unreliable for reasons I won&#x27;t expand on here, and everything else (Remix/RR, TanStack) doesn&#x27;t have opinions on trivial stuff I don&#x27;t want to configure. On the other hand, frameworks like Rails and Laravel have great solutions but use languages I don&#x27;t enjoy that much. So, Phoenix strikes the perfect balance.</p>
<p>To put these two frameworks together without having to ship two separate applications, I use <a href="https://inertiajs.com/">Inertia</a>.</p>
<h2>Why not LiveView</h2>
<p>While writing this post, I realized that I had to justify why I&#x27;m not using LiveView. If you&#x27;re aware of Phoenix, chances are that you&#x27;ve heard of LiveView as well; there&#x27;s a lot of hype around it.</p>
<p>Ultimately, I wrote too much about it and had to move it to a separate post. You can find my ramblings <a href="https://dnlytras.com/blog/on-liveview">here</a>.</p>
<p>In short, LiveView is not a good fit for my apps. For the past few years, I&#x27;ve worked heavily on building charts and data visualizations across different companies. My ability to build these in React, prototype something quickly, and then hand it off to a designer to polish is unmatched.</p>
<p>LiveView is a phenomenal choice if your use case doesn&#x27;t need a lot of interactivity. Use it, and don&#x27;t bother with what I think.</p>
<h2>What&#x27;s Inertia exactly?</h2>
<p>The idea behind Inertia is simple.</p>
<p>We want to replace the server-rendered views with JavaScript page components.</p>
<p>On the very first user visit, we return the full page. We also include a JSON payload with the initial data in the HTML. Then, the JavaScript framework of our choosing boots up and takes over.</p>
<p>It will listen for clicks, form submissions, and other events, and send XHR requests to the server. Even simple links will be intercepted, and an XHR request will be sent. Assuming these requests to the backend have the <code>X-Inertia</code> header, the server responds with a JSON payload.</p>
<p>This JSON payload contains everything the front-end needs to render the page. It includes the component to render, any shared props, errors from validations, and flash messages.</p>
<p>While the idea is simple, many small details make it great. For example, you can <a href="https://inertiajs.com/partial-reloads">request a subset of the data</a> if you&#x27;re revisiting the same page.</p>
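<p>For a feel of what travels over the wire, here&#x27;s the rough shape of an Inertia &quot;page object&quot; as described in the protocol docs. The field values below are made up, and <code>errors</code>/<code>flash</code> are shared props by convention rather than part of the core protocol; see the protocol page for the authoritative spec:</p>

```javascript
// Rough shape of the JSON payload an Inertia server adapter returns
// for a request carrying the X-Inertia header (illustrative values).
const pageObject = {
  component: 'user-preferences-page', // client-side page component to render
  props: {
    user: { name: 'Ada' },            // data for the page
    errors: {},                       // validation errors (shared by convention)
    flash: {},                        // flash messages (shared by convention)
  },
  url: '/users/preferences',          // current URL, used for history state
  version: 'a1b2c3',                  // asset version, for cache busting
};
```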
<p>So, Inertia is not exactly a framework but a protocol implemented by different adapters. There are three server-side adapters (Laravel, Rails, and Phoenix), and three client-side adapters (React, Vue, Svelte).</p>
<p>Feel free to check out the <a href="https://inertiajs.com/the-protocol">docs page</a> for more details.</p>
<h2>Setting up Inertia</h2>
<p>Everything I needed was perfectly documented <a href="https://github.com/inertiajs/inertia-phoenix">in the Phoenix adapter</a>.</p>
<p>Surprisingly, I didn&#x27;t have to do much; mostly just copy-pasting the instructions. I made a few modifications in <code>config.exs</code> and added a plug in <code>router.ex</code>.</p>
<p>After setting everything up, the front-end side of things works exactly as expected. We take advantage of Phoenix&#x27;s asset pipeline (esbuild), and we&#x27;re good to go.</p>
<p>I honestly expected some friction, but there was none. Here&#x27;s what my front-end entry point looks like:</p>
<pre><code class="language-jsx">import {createInertiaApp} from &#x27;@inertiajs/react&#x27;;
import axios from &#x27;axios&#x27;;
import React from &#x27;react&#x27;;
import {createRoot} from &#x27;react-dom/client&#x27;;

axios.defaults.xsrfHeaderName = &#x27;x-csrf-token&#x27;;

createInertiaApp({
  title: (title) =&gt; `${title} - horionos`,
  resolve: async (name) =&gt; {
    return await import(`./pages/${name}.jsx`);
  },
  setup({App, el, props}) {
    createRoot(el).render(&lt;App {...props} /&gt;);
  },
});
</code></pre>
<p>and what the rest of the assets folder looks like:</p>
<pre><code class="language-txt">assets/
├── css/
├── js/
│   ├── components/
│   ├── hooks/
│   ├── layouts/
│   ├── lib/
│   ├── pages/
│   │   ├── auth/
│   │   ├── 404.jsx
│   │   ├── 500.jsx
│   │   ├── home-page.jsx
│   │   ├── user-preferences-page.jsx
│   │   └── user-settings-page.jsx
│   └── app.jsx
├── .prettierrc
├── eslint.config.mjs
├── package-lock.json
├── package.json
├── tailwind.config.js # Luckily, soon to be removed in tailwindcss v4
└── tsconfig.json
</code></pre>
<p>I appreciate the simplicity of this setup.</p>
<p>On the backend, my business logic in <code>lib</code> is untouched; I only have my controllers to update. For example, I replaced the <code>render</code> function with <code>render_inertia</code> in my generated <code>user_session_controller</code>.</p>
<pre><code class="language-elixir">defmodule HorionosWeb.UserSessionController do
  use HorionosWeb, :controller

  alias Horionos.Accounts
  alias HorionosWeb.UserAuth

  def new(conn, _params) do
    # render(conn, :new, error_message: nil) # Before
    render_inertia(conn, &quot;auth/login-page&quot;)  # After
  end

  def create(conn, %{&quot;user&quot; =&gt; user_params}) do
    %{&quot;email&quot; =&gt; email, &quot;password&quot; =&gt; password} = user_params

    if user = Accounts.get_user_by_email_and_password(email, password) do
      UserAuth.log_in_user(conn, user, user_params)
    else
      # In order to prevent user enumeration attacks, don&#x27;t disclose whether the email is registered.
      conn
      |&gt; put_flash(:error, &quot;You have entered an invalid email or password.&quot;)
      |&gt; redirect(to: ~p&quot;/users/log_in&quot;)
    end
  end

  def delete(conn, _params) do
    UserAuth.log_out_user(conn)
  end
end
</code></pre>
<h2>Code splitting</h2>
<p>The first challenge I had to solve was optimizing the bundle size. Without any modifications, you download everything at once, even pages you won&#x27;t visit, and that isn&#x27;t ideal.</p>
<p>I took a jab at fixing it with <code>React.lazy</code>, but realized there&#x27;s nothing React- or Inertia-specific here. Since Phoenix uses <code>esbuild</code>, that&#x27;s where I should look.</p>
<p>Enabling code splitting was straightforward, with the caveat that I had to ship the code as ES modules. For the full instructions, here&#x27;s my PR to the <a href="https://github.com/inertiajs/inertia-phoenix/pull/29">Phoenix adapter</a>.</p>
<p>In short, the changes needed are <code>--splitting</code> &amp; <code>--format=esm</code>.</p>
<pre><code class="language-elixir">config :esbuild,
  version: &quot;0.21.5&quot;,
  horionos: [
    args:
      ~w(js/app.jsx --bundle --chunk-names=chunks/[name]-[hash] --splitting --format=esm --target=es2020 --outdir=../priv/static/assets --external:/fonts/* --external:/images/* --loader:.js=jsx --loader:.jsx=jsx --loader:.ts=ts --loader:.tsx=tsx ),
    cd: Path.expand(&quot;../assets&quot;, __DIR__),
    env: %{&quot;NODE_PATH&quot; =&gt; Path.expand(&quot;../deps&quot;, __DIR__)}
  ]
</code></pre>
<p>With that out of the way, we only have to download the code we need.</p>
<blockquote><p><strong>Using ES modules might be a blocker for some.</strong> It&#x27;s a limitation of esbuild, not Inertia. Laravel, for example, uses Vite, which in turn uses Rollup for production builds. I don&#x27;t know what Rails uses this year, so I can&#x27;t comment. (I&#x27;m still salty about a painful Webpacker migration)</p></blockquote>
<h2>Routing</h2>
<p>Here&#x27;s another thing that I love about Inertia. It has no opinions on routing. Our backend is responsible for these decisions.</p>
<p>Of course you still have to make some decisions on the front-end side, especially regarding how to structure your nested layouts (if applicable).</p>
<p>Here&#x27;s what my router looks like:</p>
<pre><code class="language-elixir"># .. imports and plugs

# Guest routes
scope &quot;/&quot;, HorionosWeb do
  pipe_through [:browser, :redirect_if_user_is_authenticated]

  get &quot;/users/register&quot;, UserRegistrationController, :new
  post &quot;/users/register&quot;, UserRegistrationController, :create
  get &quot;/users/log_in&quot;, UserSessionController, :new
  post &quot;/users/log_in&quot;, UserSessionController, :create
  get &quot;/users/reset_password&quot;, UserResetPasswordController, :new
  post &quot;/users/reset_password&quot;, UserResetPasswordController, :create
  get &quot;/users/reset_password/:token&quot;, UserResetPasswordController, :edit
  put &quot;/users/reset_password/:token&quot;, UserResetPasswordController, :update
end

# Authenticated routes
scope &quot;/&quot;, HorionosWeb do
  pipe_through [:browser, :require_authenticated_user]

  get &quot;/&quot;, HomeController, :home

  get &quot;/users/preferences&quot;, UserPreferencesController, :edit
  put &quot;/users/preferences&quot;, UserPreferencesController, :update

  get &quot;/users/settings&quot;, UserSettingsController, :edit
  put &quot;/users/settings&quot;, UserSettingsController, :update
  get &quot;/users/settings/confirm_email/:token&quot;, UserSettingsController, :confirm_email
end

# Mixed routes
scope &quot;/&quot;, HorionosWeb do
  pipe_through [:browser]

  delete &quot;/users/log_out&quot;, UserSessionController, :delete
  get &quot;/users/confirm&quot;, UserConfirmationController, :new
  post &quot;/users/confirm&quot;, UserConfirmationController, :create
  get &quot;/users/confirm/:token&quot;, UserConfirmationController, :edit
  post &quot;/users/confirm/:token&quot;, UserConfirmationController, :update
end

# Catch all route
scope &quot;/&quot;, HorionosWeb do
  pipe_through [:browser]

  get &quot;/*path&quot;, ErrorController, :not_found
end
</code></pre>
<p>I love how clean and simple this is. In LiveView, using <code>live_session</code> made this so much harder for me to follow and reason about.</p>
<h2>Pages and controllers</h2>
<p>Now, let&#x27;s see how we render a page in more detail. Let&#x27;s take the <code>users/preferences</code> path and check out the controller:</p>
<pre><code class="language-elixir">defmodule HorionosWeb.UserPreferencesController do
  use HorionosWeb, :controller

  alias Horionos.Accounts

  def edit(conn, _params) do
    render_inertia(conn, &quot;user-preferences-page&quot;)
  end

  def update(conn, params) do
    user = conn.assigns.current_user

    case Accounts.update_user_preferences(user, params) do
      {:ok, _user} -&gt;
        conn
        |&gt; put_flash(:info, &quot;Preferences updated successfully&quot;)
        |&gt; redirect(to: ~p&quot;/users/preferences&quot;)

      {:error, changeset} -&gt;
        conn
        |&gt; assign_errors(changeset)
        |&gt; redirect(to: ~p&quot;/users/preferences&quot;)
    end
  end
end
</code></pre>
<p>There&#x27;s nothing special here. Instead of rendering a template, we pass the name of the JavaScript page component to render.</p>
<p>You might notice the <code>assign_errors</code> function. It&#x27;s a <a href="https://github.com/inertiajs/inertia-phoenix?tab=readme-ov-file#validations">helper</a> provided by the Phoenix adapter, which converts changeset errors to a client-side friendly format.</p>
<p>Now here&#x27;s my <code>user-preferences-page.jsx</code>, which gets rendered:</p>
<pre><code class="language-jsx">import ComputerDesktopIcon from &#x27;@heroicons/react/20/solid/ComputerDesktopIcon&#x27;;
import MoonIcon from &#x27;@heroicons/react/20/solid/MoonIcon&#x27;;
import SunIcon from &#x27;@heroicons/react/20/solid/SunIcon&#x27;;
import {Head, useForm} from &#x27;@inertiajs/react&#x27;;
import React from &#x27;react&#x27;;

import {Button} from &#x27;~/components/button&#x27;;
import {Divider} from &#x27;~/components/divider&#x27;;
import {ErrorBoundary} from &#x27;~/components/error-boundary&#x27;;
import {ErrorMessage, Field, Label} from &#x27;~/components/fieldset&#x27;;
import {Heading, Subheading} from &#x27;~/components/heading&#x27;;
import {Listbox, ListboxLabel, ListboxOption} from &#x27;~/components/listbox&#x27;;
import {Text} from &#x27;~/components/text&#x27;;
import {useCurrentUser} from &#x27;~/hooks/use-current-user&#x27;;
import {MainLayout} from &#x27;~/layouts/main&#x27;;

const THEMES = [
  {value: &#x27;system&#x27;, label: &#x27;System&#x27;, icon: ComputerDesktopIcon},
  {value: &#x27;light&#x27;, label: &#x27;Light&#x27;, icon: SunIcon},
  {value: &#x27;dark&#x27;, label: &#x27;Dark&#x27;, icon: MoonIcon},
];

const DATE_FORMATS = [
  {value: &#x27;YYYY-MM-DD&#x27;, label: &#x27;YYYY-MM-DD&#x27;},
  {value: &#x27;MM-DD-YYYY&#x27;, label: &#x27;MM-DD-YYYY&#x27;},
  {value: &#x27;DD-MM-YYYY&#x27;, label: &#x27;DD-MM-YYYY&#x27;},
];

function Page() {
  const user = useCurrentUser();
  const preferencesForm = useForm({
    theme: user.preferences.theme,
    date_format: user.preferences.date_format,
  });

  function onPreferencesChangeSubmit(event) {
    event.preventDefault();

    preferencesForm.put(&#x27;/users/preferences&#x27;);
  }

  return (
    &lt;MainLayout&gt;
      &lt;Head title=&quot;Preferences&quot; /&gt;
      &lt;div className=&quot;mx-auto max-w-4xl&quot;&gt;
        &lt;Heading&gt;Preferences&lt;/Heading&gt;
        &lt;Divider className=&quot;my-10 mt-6&quot; /&gt;
        &lt;form onSubmit={onPreferencesChangeSubmit}&gt;
          &lt;section className=&quot;grid gap-x-8 gap-y-6 sm:grid-cols-2&quot;&gt;
            &lt;div className=&quot;space-y-1&quot;&gt;
              &lt;Subheading&gt;Theme&lt;/Subheading&gt;
              &lt;Text className=&quot;text-balance&quot;&gt;
                Choose a theme for the application. The theme will be applied to
                all pages.
              &lt;/Text&gt;
            &lt;/div&gt;
            &lt;Field&gt;
              &lt;Label&gt;Theme&lt;/Label&gt;
              &lt;Listbox
                name=&quot;theme&quot;
                value={preferencesForm.data.theme}
                onChange={(value) =&gt; preferencesForm.setData(&#x27;theme&#x27;, value)}
                disabled={preferencesForm.processing}
              &gt;
                {THEMES.map(({value, label, icon: Icon}) =&gt; (
                  &lt;ListboxOption key={value} value={value}&gt;
                    &lt;ListboxLabel className=&quot;flex gap-2 items-center&quot;&gt;
                      &lt;Icon className=&quot;size-4&quot; /&gt;
                      &lt;span&gt;{label}&lt;/span&gt;
                    &lt;/ListboxLabel&gt;
                  &lt;/ListboxOption&gt;
                ))}
              &lt;/Listbox&gt;
              {preferencesForm.errors.theme &amp;&amp; (
                &lt;ErrorMessage&gt;{preferencesForm.errors.theme}&lt;/ErrorMessage&gt;
              )}
            &lt;/Field&gt;
          &lt;/section&gt;
          &lt;Divider className=&quot;my-10&quot; soft /&gt;
          &lt;section className=&quot;grid gap-x-8 gap-y-6 sm:grid-cols-2&quot;&gt;
            &lt;div className=&quot;space-y-1&quot;&gt;
              &lt;Subheading&gt;Date format&lt;/Subheading&gt;
              &lt;Text className=&quot;text-balance&quot;&gt;
                Choose how you want dates to be displayed.
              &lt;/Text&gt;
            &lt;/div&gt;
            &lt;Field&gt;
              &lt;Label&gt;Date format&lt;/Label&gt;
              &lt;Listbox
                name=&quot;date_format&quot;
                value={preferencesForm.data.date_format}
                onChange={(value) =&gt;
                  preferencesForm.setData(&#x27;date_format&#x27;, value)
                }
                disabled={preferencesForm.processing}
              &gt;
                {DATE_FORMATS.map(({value, label}) =&gt; (
                  &lt;ListboxOption key={value} value={value}&gt;
                    &lt;ListboxLabel&gt;{label}&lt;/ListboxLabel&gt;
                  &lt;/ListboxOption&gt;
                ))}
              &lt;/Listbox&gt;
              {preferencesForm.errors.date_format &amp;&amp; (
                &lt;ErrorMessage&gt;
                  {preferencesForm.errors.date_format}
                &lt;/ErrorMessage&gt;
              )}
            &lt;/Field&gt;
          &lt;/section&gt;
          &lt;Divider className=&quot;my-10&quot; soft /&gt;
          &lt;div className=&quot;flex justify-end gap-4&quot;&gt;
            &lt;Button
              type=&quot;button&quot;
              plain
              onClick={() =&gt; {
                preferencesForm.reset();
              }}
            &gt;
              Reset
            &lt;/Button&gt;
            &lt;Button type=&quot;submit&quot; disabled={preferencesForm.processing}&gt;
              Save changes
            &lt;/Button&gt;
          &lt;/div&gt;
        &lt;/form&gt;
      &lt;/div&gt;
    &lt;/MainLayout&gt;
  );
}

export default function UserPreferencesPage(props) {
  return (
    &lt;ErrorBoundary&gt;
      &lt;Page {...props} /&gt;
    &lt;/ErrorBoundary&gt;
  );
}
</code></pre>
<p>I bring in my favourite UI library and iterate without caring about the backend. I love it. I have complete front-end control.</p>
<p>If I want to tweak the backend response, I update my <code>.ex</code> files without opening a second repo. Everything works in harmony.</p>
<p>Now, you&#x27;ll notice the <code>useForm</code> helper. This is the glue code that Inertia provides to handle form submissions.</p>
<p>You might also notice that I use <code>snake_case</code> sometimes. You have the option to <code>camelize</code> the props in the Inertia adapter config, but somehow mixing them doesn&#x27;t annoy me 🤔</p>
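<p>For reference, enabling that is a one-line config change. This is sketched from memory, so double-check the adapter docs:</p>
<pre><code class="language-elixir"># config/config.exs
config :inertia,
  endpoint: MyAppWeb.Endpoint,
  # serialize prop keys like :date_format as dateFormat on the client
  camelize_props: true
</code></pre>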
<blockquote><p>I won&#x27;t lie. I miss the way I handle forms with Remix/RR. I don&#x27;t enjoy having to <code>event.preventDefault</code> my forms or not using <code>FormData</code>. But I&#x27;m willing to make that trade-off for the productivity I get here. You can read more about the form helpers from Inertia <a href="https://inertiajs.com/forms">here</a>.</p></blockquote>
<h2>Authentication</h2>
<p>No opinions here whatsoever. I use the <code>phx.gen.auth</code> generator, and I&#x27;m good to go.</p>
<p>If the user is logged in, I just pass their details as <a href="https://inertiajs.com/shared-data">shared data</a>. Here&#x27;s how.</p>
<p>First, I pick the fields I want to expose from the User schema:</p>
<pre><code class="language-elixir">defmodule Horionos.Accounts.Schemas.User do
# ...
  @derive {Jason.Encoder, only: [:id, :full_name, :email, :confirmed_at, :preferences]}
# ...
end
</code></pre>
<p>Then, I update my <code>require_authenticated_user</code> plug to include the user:</p>
<pre><code class="language-elixir">def require_authenticated_user(conn, _opts) do
  user = conn.assigns[:current_user]

  if user do
    conn
    |&gt; assign_prop(:current_user, user)
  else
    conn
    |&gt; put_flash(:error, &quot;You must log in to access this page.&quot;)
    |&gt; maybe_store_return_to()
    |&gt; redirect(to: ~p&quot;/users/log_in&quot;)
    |&gt; halt()
  end
end
</code></pre>
<p>And the front-end takes over with a simple re-usable hook.</p>
<pre><code class="language-tsx">import {PageProps} from &#x27;@inertiajs/core&#x27;;
import {usePage} from &#x27;@inertiajs/react&#x27;;

interface UserPreferences {
  theme: &#x27;light&#x27; | &#x27;dark&#x27; | &#x27;system&#x27;;
  date_format: &#x27;YYYY-MM-DD&#x27; | &#x27;MM-DD-YYYY&#x27; | &#x27;DD-MM-YYYY&#x27;;
}

interface CurrentUser {
  email: string;
  full_name?: string;
  confirmed_at: string | null;
  preferences: UserPreferences;
}

export function useCurrentUser() {
  const {props} = usePage&lt;PageProps &amp; {current_user: CurrentUser}&gt;();
  return props.current_user;
}

export type {CurrentUser, UserPreferences};
</code></pre>
<blockquote><p>I haven&#x27;t decided if using TypeScript makes sense here.</p><p>I just wanted to see how easy it was to set up. Turns out it&#x27;s trivial, and you only need to add the loader to your esbuild config.</p></blockquote>
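<p>For the curious, here&#x27;s roughly what that looks like in <code>config/config.exs</code>. The flags are from memory, so treat this as a sketch and verify against the esbuild docs:</p>
<pre><code class="language-elixir"># config/config.exs
config :esbuild,
  version: &quot;0.17.11&quot;,
  default: [
    # point the entry at a .tsx file and tell esbuild how to load it
    args:
      ~w(js/app.tsx --bundle --loader:.tsx=tsx --target=es2017 --outdir=../priv/static/assets),
    cd: Path.expand(&quot;../assets&quot;, __DIR__),
    env: %{&quot;NODE_PATH&quot; =&gt; Path.expand(&quot;../deps&quot;, __DIR__)}
  ]
</code></pre>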
<h2>Caveats</h2>
<p>There are a few caveats we should be aware of.</p>
<ol>
<li><strong>The Phoenix adapter is the biggest point of failure</strong>. We depend on the maintainers to keep it up to date with the latest Inertia changes. Inertia isn&#x27;t that widely used in the Phoenix space (compared to Laravel which has first-class support), so it&#x27;s good to keep that in mind.</li>
<li><strong>Phoenix bets heavily on LiveView</strong>. If you&#x27;re picking Phoenix expecting some feature parity with frameworks like Laravel, you will be disappointed. You&#x27;ll see releases that may not interest you.</li>
<li>Now, this isn&#x27;t related to Inertia, but <strong>React seems to be eyeing the server-side rendering space</strong>. I don&#x27;t see that much love for SPAs anymore, and the React docs barely mention Vite, the best way to run SPAs in production. If, like me, you want to use React this way, maybe we&#x27;re not the target audience anymore.</li>
</ol>
<p>I&#x27;m okay with React exploring a different space and focusing on RSC. I&#x27;m also okay with Phoenix polishing its LiveView offering. I&#x27;m only keeping an eye on the Phoenix adapter, but there&#x27;s always the option to fork it if things go south.</p>
<h2>Final thoughts</h2>
<p>I tried hard to make full-stack JS work, but I keep re-solving the same (solved) problems that JavaScript frameworks don&#x27;t care about. Examples include:</p>
<ul>
<li>emails, trapping them during development, and testing</li>
<li>async jobs</li>
<li>file uploads</li>
<li>ORM or query builder, migrations</li>
<li>multi-tenancy, authorization, roles</li>
<li>impersonation</li>
<li>logs</li>
<li>tracing</li>
<li>...and more</li>
</ul>
<p>I&#x27;m doing all of these because I&#x27;m silly and like to write JSX. It is probably not a reasonable tradeoff.</p>
<p>With Inertia, I can have my cake and eat it too. I bring the backend framework I like, and I let React take over my front-end. One toolchain, one repo, one deployment pipeline.</p>
<p>Being able to combine Elixir and React is a game changer. After some years of burnout and not enjoying coding professionally, this has been a breath of fresh air.</p>
<p>So, to conclude, I&#x27;m very happy with this setup. If you love LiveView and it gives you that newfound enthusiasm for coding, I&#x27;m so happy for you. Likewise, for the Rails renaissance.</p>
<p>While this post is React-centric, nothing stops you from using Vue or Svelte with Phoenix instead. I hope you try Inertia and see if it fits your workflow.</p>
<blockquote>Resources:<ul>
<li><a href="https://inertiajs.com/">Inertia.js docs</a></li>
<li><a href="https://github.com/inertiajs/inertia-phoenix">Inertia.js Phoenix adapter (GitHub.com)</a></li>
<li><a href="https://www.youtube.com/watch?v=uyfyFRvng3c">Simplify React and Phoenix using Inertia JS (YouTube.com)</a></li>
<li><a href="https://evilmartians.com/chronicles/inertiajs-in-rails-a-new-era-of-effortless-integration">Inertia.js in Rails (evilmartians.com)</a></li>
</ul></blockquote>]]></content:encoded>
            <author>dnlytras@gmail.com (Dimitrios Lytras)</author>
        </item>
        <item>
            <title><![CDATA[My experience with Phoenix LiveView]]></title>
            <link>https://dnlytras.com/blog/on-liveview</link>
            <guid isPermaLink="false">https://dnlytras.com/blog/on-liveview</guid>
            <pubDate>Tue, 21 Jan 2025 00:00:00 GMT</pubDate>
            <description><![CDATA[Some takeaways from my time with Phoenix LiveView]]></description>
            <content:encoded><![CDATA[<blockquote>Contents:<ul>
<li><a href="#javascript-interop">JavaScript interop</a></li>
<li><a href="#components">Components</a></li>
<li><a href="#code-organization">Code organization</a></li>
<li><a href="#live_session-and-on_mount">live_session and on_mount</a></li>
<li><a href="#you-still-need-controllers">You still need controllers</a></li>
<li><a href="#genserver-centric-api">GenServer centric API</a></li>
<li><a href="#final-thoughts">Final thoughts</a></li>
</ul></blockquote>
<p>I&#x27;ve been experimenting with Phoenix LiveView for a while now. My experience has been net positive, but I can&#x27;t bring myself to love it or use it for my production projects.</p>
<p>It&#x27;s an exciting technology where things work, but they don&#x27;t quite feel right.</p>
<h2>JavaScript interop</h2>
<p>The biggest issue for me is the interop with JavaScript.</p>
<p>At some point, no matter what your initial expectations were, you need to bridge this gap and use JavaScript. That&#x27;s where things get hairy.</p>
<p>Initially, you try to make things work with <a href="https://hexdocs.pm/phoenix_live_view/1.0.2/js-interop.html#client-hooks-via-phx-hook">JS hooks</a>, but this quickly becomes difficult to work with. Here&#x27;s <a href="https://github.com/fly-apps/live_beats/blob/master/assets/js/app.js">an example from the LiveBeats repo</a>. The LiveView markup can very easily get out of sync with JavaScript. Maintainability is also an issue.</p>
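<p>For context, a client hook is just an object with lifecycle callbacks that you register when building the <code>LiveSocket</code>. A minimal sketch, with an illustrative hook name:</p>
<pre><code class="language-javascript">// assets/js/app.js (sketch)
const Hooks = {
  AutoFocus: {
    mounted() {
      // this.el is the DOM element that carries the phx-hook attribute
      this.el.focus();
    },
  },
};

// Registered when building the socket, roughly:
//   new LiveSocket(endpoint, Socket, {hooks: Hooks});
</code></pre>
<p>Each hook is another piece of plumbing the HEEx markup has to stay in sync with, which is where the maintenance pain starts.</p>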
<p>Some people pull in Alpine, or libraries like <a href="https://github.com/woutdp/live_svelte">LiveSvelte</a>, to help with this. But this seems like a weird choice to me. You&#x27;re niching down your stack even further, and the initial quest to reduce front-end complexity ends up increasing it. The no-JS-needed premise is gone, and you&#x27;re left with a weird hybrid stack.</p>
<p>You get a bit of styling here, a little bit of front-end logic there, and a little bit of front-end state management over there. All of this, mushed together in the same codebase with the backend. It works, but stepping away for some time, and returning for a new feature, doesn&#x27;t feel right. You have to reorient yourself, and remember how all this plumbing works.</p>
<p>JavaScript interop should be a first-class citizen and not an afterthought.</p>
<h2>Components</h2>
<p>The ecosystem is obviously small. That&#x27;s, of course, expected when you&#x27;re picking Elixir.</p>
<p>But now, when your front-end uses LiveView and HEEx, you&#x27;re closing the door to a lot of good stuff out there.</p>
<p>Authoring components in HEEx can be very awkward. Here&#x27;s part of the input component from the Phoenix skeleton.</p>
<p>It&#x27;s a god-component that accepts a lot of options, and then does pattern matching to render the appropriate input.</p>
<pre><code class="language-elixir">attr :id, :any, default: nil
attr :name, :any
attr :label, :string, default: nil
attr :value, :any

attr :type, :string,
  default: &quot;text&quot;,
  values: ~w(checkbox color date datetime-local email file month number password
             range search select tel text textarea time url week)

attr :field, Phoenix.HTML.FormField,
  doc: &quot;a form field struct retrieved from the form, for example: @form[:email]&quot;

attr :errors, :list, default: []
attr :checked, :boolean, doc: &quot;the checked flag for checkbox inputs&quot;
attr :prompt, :string, default: nil, doc: &quot;the prompt for select inputs&quot;
attr :options, :list, doc: &quot;the options to pass to Phoenix.HTML.Form.options_for_select/2&quot;
attr :multiple, :boolean, default: false, doc: &quot;the multiple flag for select inputs&quot;

attr :rest, :global,
  include: ~w(accept autocomplete capture cols disabled form list max maxlength min minlength
              multiple pattern placeholder readonly required rows size step)

def input(%{field: %Phoenix.HTML.FormField{} = field} = assigns) do
  errors = if Phoenix.Component.used_input?(field), do: field.errors, else: []

  assigns
  |&gt; assign(field: nil, id: assigns.id || field.id)
  |&gt; assign(:errors, Enum.map(errors, &amp;translate_error(&amp;1)))
  |&gt; assign_new(:name, fn -&gt; if assigns.multiple, do: field.name &lt;&gt; &quot;[]&quot;, else: field.name end)
  |&gt; assign_new(:value, fn -&gt; field.value end)
  |&gt; input()
end

def input(%{type: &quot;checkbox&quot;} = assigns) do
  assigns =
    assign_new(assigns, :checked, fn -&gt;
      Phoenix.HTML.Form.normalize_value(&quot;checkbox&quot;, assigns[:value])
    end)

  ~H&quot;&quot;&quot;
  &lt;div&gt;
    &lt;label class=&quot;flex items-center gap-4 text-sm leading-6 text-zinc-600&quot;&gt;
      &lt;input type=&quot;hidden&quot; name={@name} value=&quot;false&quot; disabled={@rest[:disabled]} /&gt;
      &lt;input
        type=&quot;checkbox&quot;
        id={@id}
        name={@name}
        value=&quot;true&quot;
        checked={@checked}
        class=&quot;rounded-sm border-zinc-300 text-zinc-900 focus:ring-0&quot;
        {@rest}
      /&gt;
      {@label}
    &lt;/label&gt;
    &lt;.error :for={msg &lt;- @errors}&gt;{msg}&lt;/.error&gt;
  &lt;/div&gt;
  &quot;&quot;&quot;
end

def input(%{type: &quot;select&quot;} = assigns) do
  # omitted
end

def input(%{type: &quot;textarea&quot;} = assigns) do
  # omitted
end

# etc..
</code></pre>
<p>From an Elixir POV, it&#x27;s fantastic, I love that we can do this. But from a front-end perspective, it doesn&#x27;t excite me or give me confidence. When switching context from Remix and Next.js projects, I feel slower and less productive. I can&#x27;t do quick prototypes.</p>
<p>There are some third-party component libraries (mostly paid), but they are not as polished as the ones you get with React or Vue.</p>
<h2>Code organization</h2>
<p>No back-end framework, to my knowledge, provides a good solution for organizing your front-end code, and Phoenix is no exception.</p>
<p>When you start a new Phoenix project, you get all your re-usable components in a <code>core_components.ex</code> file. How you organize, grow, and maintain this file is up to you.</p>
<p>I agree that this is something unique to each application, but I can&#x27;t help but feel that the lack of a good solution is a missed opportunity.</p>
<p>When you add LiveComponents to the mix, things get more annoying. Moving things around also means updating module names if you want to follow Elixir&#x27;s naming conventions.</p>
<h2>live_session and on_mount</h2>
<p>Let&#x27;s move on. <code>live_session</code> groups LiveViews together so they can share <code>on_mount</code> hooks and navigate between each other over the same socket. So let&#x27;s say we have this in our router:</p>
<pre><code class="language-elixir">scope &quot;/&quot;, MyAppWeb do
  pipe_through [:browser, :require_authenticated_user]

  live_session :authenticated, on_mount: {MyAppWeb.UserAuth, :ensure_authenticated} do
    live &quot;/dashboard&quot;, DashboardLive
    live &quot;/settings&quot;, SettingsLive
  end
end
</code></pre>
<p>Before we render the <code>DashboardLive</code> or <code>SettingsLive</code>, we double-check that the user is authenticated in the <code>ensure_authenticated</code> function.</p>
<pre><code class="language-elixir">def on_mount(:ensure_authenticated, _params, _session, socket) do
  if socket.assigns[:current_user] do
    {:cont, socket}
  else
    {:halt, redirect(socket, to: ~p&quot;/login&quot;)}
  end
end
</code></pre>
<p>This looks good on the surface but gets annoying when you have to do multiple checks.</p>
<pre><code class="language-elixir">scope &quot;/&quot;, MyAppWeb do
  pipe_through [
    :browser,
    :require_authenticated_user,
    :require_email_verified,
    :require_unlocked_account,
    :require_organization
  ]

  # Controllers need to be used for some things
  post &quot;/organization/select&quot;, OrganizationSessionController, :update
  post &quot;/users/clear_sessions&quot;, UserSessionController, :delete_other_sessions

  live_session :authenticated_with_organization,
    on_mount: [
      {UserAuthLive, :ensure_authenticated},
      {UserAuthLive, :ensure_current_organization},
      {UserAuthLive, :ensure_email_verified},
      {UserAuthLive, :redirect_if_locked},
      {LiveHelpers, :default}
    ] do
    live &quot;/&quot;, DashboardLive, :home

    # User settings
    live &quot;/users/settings&quot;, UserSettings.IndexLive, :edit
    live &quot;/users/settings/security&quot;, UserSettings.SecurityLive, :security
    live &quot;/users/settings/confirm_email/:token&quot;, UserSettings.IndexLive, :confirm_email

    # Organization management
    live &quot;/organization&quot;, Organization.IndexLive, :index
    live &quot;/organization/invitations&quot;, Organization.InvitationsLive, :index
  end
end
</code></pre>
<p>This becomes a headache for me. I understand why you need to do things twice, but it feels awkward, and you can easily mess it up.</p>
<p>There might well be a better way to do this, but if I have to think about it that much, then it&#x27;s not a good API.</p>
<h2>You still need controllers</h2>
<p>When I was exploring LiveView, I was hoping that I could drop the controllers and go full-on with this new paradigm, similar to React Server Components.</p>
<p>Essentially, have a single file that does everything for that page.</p>
<p>Unfortunately, LiveView can&#x27;t do it; you still need controllers. For example, you need to <a href="https://hexdocs.pm/phoenix_live_view/form-bindings.html#submitting-the-form-action-over-http">POST to a controller to set the session</a>.</p>
<p>Here&#x27;s an example from the authentication generator. You create the user in LiveView, but for the session, you need to hit a controller.</p>
<pre><code class="language-elixir">defmodule MyAppWeb.UserRegistrationLive do
  use MyAppWeb, :live_view

  alias MyApp.Accounts
  alias MyApp.Accounts.User

  def render(assigns) do
    ~H&quot;&quot;&quot;
    &lt;div class=&quot;mx-auto max-w-sm&quot;&gt;
      &lt;.header class=&quot;text-center&quot;&gt;
        Register for an account
        &lt;:subtitle&gt;
          Already registered?
          &lt;.link navigate={~p&quot;/users/log_in&quot;} class=&quot;font-semibold text-brand hover:underline&quot;&gt;
            Log in
          &lt;/.link&gt;
          to your account now.
        &lt;/:subtitle&gt;
      &lt;/.header&gt;

      &lt;.simple_form
        for={@form}
        id=&quot;registration_form&quot;
        phx-submit=&quot;save&quot;
        phx-change=&quot;validate&quot;
        phx-trigger-action={@trigger_submit}
        # Here ----------v
        action={~p&quot;/users/log_in?_action=registered&quot;}
        method=&quot;post&quot;
      &gt;
        &lt;.error :if={@check_errors}&gt;
          Oops, something went wrong! Please check the errors below.
        &lt;/.error&gt;

        &lt;.input field={@form[:email]} type=&quot;email&quot; label=&quot;Email&quot; required /&gt;
        &lt;.input field={@form[:password]} type=&quot;password&quot; label=&quot;Password&quot; required /&gt;

        &lt;:actions&gt;
          &lt;.button phx-disable-with=&quot;Creating account...&quot; class=&quot;w-full&quot;&gt;Create an account&lt;/.button&gt;
        &lt;/:actions&gt;
      &lt;/.simple_form&gt;
    &lt;/div&gt;
    &quot;&quot;&quot;
  end

  def mount(_params, _session, socket) do
  # omitted
  end

  def handle_event(&quot;save&quot;, %{&quot;user&quot; =&gt; user_params}, socket) do
    # And here ----------v
    case Accounts.register_user(user_params) do
      {:ok, user} -&gt;
        {:ok, _} =
          Accounts.deliver_user_confirmation_instructions(
            user,
            &amp;url(~p&quot;/users/confirm/#{&amp;1}&quot;)
          )

        changeset = Accounts.change_user_registration(user)
        {:noreply, socket |&gt; assign(trigger_submit: true) |&gt; assign_form(changeset)}

      {:error, %Ecto.Changeset{} = changeset} -&gt;
        {:noreply, socket |&gt; assign(check_errors: true) |&gt; assign_form(changeset)}
    end
  end

  def handle_event(&quot;validate&quot;, %{&quot;user&quot; =&gt; user_params}, socket) do
    # omitted
  end

  defp assign_form(socket, %Ecto.Changeset{} = changeset) do
    # omitted
  end
end
</code></pre>
<p>I understand the technical limitations, but it&#x27;s a bummer.</p>
<p>I was hoping for a more holistic approach. It feels weird that I do a mutation in LiveView and then a side effect in the controller.</p>
<p>No matter how you slice it, it&#x27;s suboptimal.</p>
<h2>GenServer centric API</h2>
<p>Finally, the API is very GenServer-centric. I believe this is intended, as the maintainers want to show you that it&#x27;s nothing more than a GenServer under the hood.</p>
<p>In my opinion, this is a mistake, and it hurts the adoption. Imagine trying to convince your front-end team to evaluate LiveView, and you show them this:</p>
<pre><code class="language-elixir">def handle_event(&quot;increment&quot;, _params, socket) do
  {:noreply, assign(socket, count: socket.assigns.count + 1)}
end
</code></pre>
<p>What is this <code>:noreply</code>? Is this a side-effect?</p>
<p>Say we generate some boilerplate with <code>mix phx.gen.live Things Thing things name:string description:text</code> and look at the <code>form_component.ex</code>.</p>
<p>On version <code>1.7.18</code> I get the following:</p>
<pre><code class="language-elixir">@impl true
def handle_event(&quot;validate&quot;, %{&quot;thing&quot; =&gt; thing_params}, socket) do
end

def handle_event(&quot;save&quot;, %{&quot;thing&quot; =&gt; thing_params}, socket) do
end

defp save_thing(socket, :edit, thing_params) do
end

defp save_thing(socket, :new, thing_params) do
end

defp notify_parent(msg), do: send(self(), {__MODULE__, msg})
</code></pre>
<p>What is this <code>send(self(), {__MODULE__, msg})</code>, and why does <a href="https://github.com/phoenixframework/phoenix/blob/4ebefb9d1f710c576f08c517f5852498dd9b935c/priv/templates/phx.gen.live/form.ex#L55-L64">only one</a> <code>handle_event</code> have <code>@impl true</code>? Why do we need this ceremony for a simple form?</p>
<p>If I hadn&#x27;t read <a href="https://www.manning.com/books/elixir-in-action-third-edition">Elixir in Action</a> before picking up Phoenix, I would have quit in the first 10 minutes. There&#x27;s no &quot;Aha!&quot; moment. The API should be simpler, there&#x27;s no need for the plumbing to be visible.</p>
<h2>Final thoughts</h2>
<p>LiveView is a great tool for internal applications or applications without lots of interactions. You can do cool things and ship them quickly. It&#x27;s honestly amazing that our options are not just React, Vue, or Angular. Even if it doesn&#x27;t suit my needs, there are 100 developers who might adore it.</p>
<p>But it&#x27;s not the silver bullet some people make it out to be.</p>
<p>I wanted to write this post to share my thoughts, and balance the discussion, as most of the posts I&#x27;ve read don&#x27;t mention some of the issues I&#x27;ve faced.</p>
<hr/>
<p>For the past weeks, I&#x27;ve been toying with <a href="https://inertiajs.com/">Inertia</a> &amp; its <a href="https://github.com/inertiajs/inertia-phoenix">Phoenix adapter</a>.</p>
<p>I have to say, it&#x27;s the most fun I&#x27;ve had coding for a while. I get the things I love about Elixir, and then throw in some React. If you&#x27;re looking for a way to get the best of both worlds, I highly recommend it. Feel free to reach out if you have any questions, I&#x27;ll write about it soon as well.</p>]]></content:encoded>
            <author>dnlytras@gmail.com (Dimitrios Lytras)</author>
        </item>
        <item>
            <title><![CDATA[Separate Session Tokens in Phoenix]]></title>
            <link>https://dnlytras.com/blog/separate-session-tokens-phoenix</link>
            <guid isPermaLink="false">https://dnlytras.com/blog/separate-session-tokens-phoenix</guid>
            <pubDate>Mon, 13 Jan 2025 00:00:00 GMT</pubDate>
            <description><![CDATA[Update the generated phx.gen.auth code to use a separate session token table]]></description>
<content:encoded><![CDATA[<p>One of the pet peeves I have with the official <code>phx.gen.auth</code> generator is that it merges all the user tokens into a single table.</p>
<pre><code class="language-elixir">create table(:users_tokens) do
  add :user_id, references(:users, on_delete: :delete_all), null: false
  add :token, :binary, null: false
  add :context, :string, null: false
  add :sent_to, :string

  timestamps(type: :utc_datetime, updated_at: false)
end
</code></pre>
<p>I&#x27;m not a big fan of this, because I consider it to be a leaky abstraction.</p>
<p>Why would my session tokens need <code>sent_to</code>? That field exists for the email-change flow.</p>
<p>What if I want to add extra metadata, like the user agent or the IP address? I shouldn&#x27;t do that in this table; my password-reset flow doesn&#x27;t care about it.</p>
<p>I believe it&#x27;s best to make this change early, before we get to unicorn status, and not have to deal with the migration when it&#x27;s painful. In this post I&#x27;ll focus on the session tokens, but the same principle applies to all others.</p>
<h2>Step 1: Generate the migration</h2>
<p>We keep the same structure, but without <code>context</code> &amp; <code>sent_to</code>. Let&#x27;s also add a unique index on the token.
There are no surprises here.</p>
<pre><code class="language-elixir">defmodule MyApp.Repo.Migrations.CreateSessionTokensTable do
  use Ecto.Migration

  def change do
    create table(:session_tokens) do
      add :user_id, references(:users, on_delete: :delete_all), null: false
      add :token, :binary, null: false

      timestamps(type: :utc_datetime, updated_at: false)
    end

    create index(:session_tokens, [:user_id])
    create unique_index(:session_tokens, [:token])
  end
end
</code></pre>
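<p>As a bonus, once sessions live in their own table, adding session-only metadata later becomes a local change. A hypothetical follow-up migration (the column names are just an illustration):</p>
<pre><code class="language-elixir">def change do
  alter table(:session_tokens) do
    # hypothetical audit columns; the other token flows never see them
    add :user_agent, :string
    add :ip_address, :string
  end
end
</code></pre>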
<h2>Optional Step: Centralize the token generation</h2>
<p>Well, here&#x27;s a second pet peeve. I don&#x27;t enjoy having to call <code>:crypto.strong_rand_bytes/1</code> everywhere in my code; it&#x27;s a bit of a code smell. The <code>hash_algorithm</code> and <code>rand_size</code> values also aren&#x27;t going to change from module to module.</p>
<p>So I like to centralize this in a helper module.</p>
<pre><code class="language-elixir">defmodule MyApp.Helpers.TokenService do
  alias MyApp.Constants

  @hash_algorithm Constants.hash_algorithm()
  @rand_size Constants.rand_size()

  @spec generate_token() :: binary()
  def generate_token do
    :crypto.strong_rand_bytes(@rand_size)
  end

  @spec hash(binary()) :: binary()
  def hash(token) do
    :crypto.hash(@hash_algorithm, token)
  end

  @spec generate() :: {binary(), binary()}
  def generate do
    raw_token = generate_token()
    {raw_token, hash(raw_token)}
  end

  @spec encode(binary()) :: binary()
  def encode(token) do
    Base.url_encode64(token, padding: false)
  end

  @spec decode(binary()) :: {:ok, binary()} | :error
  def decode(encoded_token) do
    Base.url_decode64(encoded_token, padding: false)
  end
end
</code></pre>
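<p>For illustration, here&#x27;s how these helpers compose (hashing only matters for tokens you store hashed; the encode/decode round trip recovers the raw bytes):</p>
<pre><code class="language-elixir"># Generate a raw token together with its hash
{raw, hashed} = TokenService.generate()

# URL-safe encode the raw token, e.g. for links or cookies
encoded = TokenService.encode(raw)

# Decoding recovers the original raw bytes...
{:ok, ^raw} = TokenService.decode(encoded)

# ...and re-hashing them matches the stored hash
^hashed = TokenService.hash(raw)
</code></pre>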
<h2>Step 2: Create the schema</h2>
<p>OK, back to the juicy stuff. Let&#x27;s create the <code>SessionToken</code> schema. There&#x27;s only a single <code>changeset</code> function, which generates the token and prepares the changeset.</p>
<pre><code class="language-elixir">defmodule MyApp.Accounts.SessionToken do
  use Ecto.Schema

  import Ecto.Changeset

  alias MyApp.Accounts.User
  alias MyApp.Helpers.TokenService

  schema &quot;session_tokens&quot; do
    field :token, :binary

    belongs_to :user, User

    timestamps(type: :utc_datetime, updated_at: false)
  end

  def changeset(user) do
    token = TokenService.generate_token()

    attrs =
      %{
        token: token,
        user_id: user.id
      }

    changeset =
      %__MODULE__{}
      |&gt; cast(attrs, [:token, :user_id])
      |&gt; validate_required([:token, :user_id])
      |&gt; foreign_key_constraint(:user_id)

    {token, changeset}
  end
end
</code></pre>
<h2>Step 3: Update the <code>Accounts</code> context</h2>
<p>Finally, let&#x27;s make small adjustments to the <code>Accounts</code> context. I&#x27;m also going to add two new functions, <code>revoke_other_sessions/2</code> and <code>list_sessions/2</code>.</p>
<pre><code class="language-elixir">## Session

@doc &quot;&quot;&quot;
Generates a session token.
&quot;&quot;&quot;
def generate_user_session_token(user) do
  {token, session_token_changeset} = SessionToken.changeset(user)
  Repo.insert!(session_token_changeset)
  token
end

@doc &quot;&quot;&quot;
Gets the user with the given signed token.
&quot;&quot;&quot;
def get_user_by_session_token(token) do
  days = Constants.session_validity_in_days()

  SessionToken
  |&gt; where([st], st.token == ^token)
  |&gt; join(:inner, [st], u in assoc(st, :user))
  |&gt; where([st], st.inserted_at &gt; ago(^days, &quot;day&quot;))
  |&gt; select([st, u], u)
  |&gt; Repo.one()
end

@doc &quot;&quot;&quot;
Deletes the given session token.
&quot;&quot;&quot;
def delete_user_session_token(token) do
  Repo.delete_all(from st in SessionToken, where: st.token == ^token)
  :ok
end

@doc &quot;&quot;&quot;
Revokes all session tokens for a user except the current one.

## Parameters
  - user: User struct
  - current_token: Token to keep active

## Returns
  - {number_of_revoked_sessions, nil}
&quot;&quot;&quot;
def revoke_other_sessions(user, current_token) do
  Repo.delete_all(
    from st in SessionToken,
      where: st.user_id == ^user.id and st.token != ^current_token
  )
end

@doc &quot;&quot;&quot;
Lists all active sessions for a user.

## Parameters
  - user: User struct
  - current_token: Current active session token

## Returns
  - List of session details
&quot;&quot;&quot;
def list_sessions(user, current_token) do
  SessionToken
  |&gt; where(user_id: ^user.id)
  |&gt; select([st], %{
    id: st.id,
    is_current: st.token == ^current_token
  })
  |&gt; Repo.all()
end
</code></pre>
<p>Let&#x27;s also update the <code>reset_user_password/2</code> function to delete all the session tokens for the user.</p>
<pre><code class="language-elixir">@doc &quot;&quot;&quot;
Resets the user password.

## Examples

    iex&gt; reset_user_password(user, %{password: &quot;new long password&quot;, password_confirmation: &quot;new long password&quot;})
    {:ok, %User{}}

    iex&gt; reset_user_password(user, %{password: &quot;valid&quot;, password_confirmation: &quot;not the same&quot;})
    {:error, %Ecto.Changeset{}}

&quot;&quot;&quot;
def reset_user_password(user, attrs) do
  Ecto.Multi.new()
  |&gt; Ecto.Multi.update(:user, User.password_changeset(user, attrs))
  |&gt; Ecto.Multi.delete_all(:tokens, UserToken.by_user_and_contexts_query(user, :all))
  |&gt; Ecto.Multi.delete_all(
    :session_tokens,
    from(t in SessionToken, where: t.user_id == ^user.id)
  )
  |&gt; Repo.transaction()
  |&gt; case do
    {:ok, %{user: user}} -&gt; {:ok, user}
    {:error, :user, changeset, _} -&gt; {:error, changeset}
  end
end
</code></pre>
<p>I find <code>Ecto.Multi.delete_all(:tokens, UserToken.by_user_and_contexts_query(user, :all))</code> to be sub-optimal as well, but I&#x27;ll leave it as is for now.</p>
<h2>Step 4: Update the <code>Accounts</code> context tests</h2>
<p>Last but not least, let&#x27;s update the tests. There&#x27;s not much to change here (intentionally), we just have to ensure we&#x27;re referencing the correct schema.</p>
<pre><code class="language-elixir">describe &quot;generate_user_session_token/1&quot; do
  setup do
    %{user: user_fixture()}
  end

  test &quot;generates a token&quot;, %{user: user} do
    token = Accounts.generate_user_session_token(user)
    assert user_token = Repo.get_by(SessionToken, token: token)

    # Creating the same token for another user should fail
    assert_raise Ecto.ConstraintError, fn -&gt;
      Repo.insert!(%SessionToken{
        token: user_token.token,
        user_id: user_fixture().id
      })
    end
  end
end

describe &quot;get_user_by_session_token/1&quot; do
  setup do
    user = user_fixture()
    token = Accounts.generate_user_session_token(user)
    %{user: user, token: token}
  end

  # other tests omitted
  test &quot;does not return user for expired token&quot;, %{token: token} do
    {1, nil} = Repo.update_all(SessionToken, set: [inserted_at: ~N[2020-01-01 00:00:00]])
    refute Accounts.get_user_by_session_token(token)
  end
end

describe &quot;reset_user_password/2&quot; do
  setup do
    %{user: user_fixture()}
  end

  # other tests omitted
  test &quot;deletes all tokens for the given user&quot;, %{user: user} do
    _ = Accounts.generate_user_session_token(user)
    {:ok, _} = Accounts.reset_user_password(user, %{password: &quot;new valid password&quot;})

    refute Repo.get_by(UserToken, user_id: user.id)
    refute Repo.get_by(SessionToken, user_id: user.id)
  end
end
</code></pre>
<h2>Step 5: Remove old session tokens</h2>
<p>One last thing: be sure to remove the leftover session tokens from the <code>users_tokens</code> table, any way you see fit.
It&#x27;s a one-time operation, so you can do it manually, or write a small migration for it.</p>
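<p>If you go the migration route, a minimal sketch might look like the following (assuming the default <code>users_tokens</code> table from <code>phx.gen.auth</code>, where session rows carry <code>context = &#x27;session&#x27;</code>):</p>
<pre><code class="language-elixir">defmodule MyApp.Repo.Migrations.DeleteLegacySessionTokens do
  use Ecto.Migration

  def up do
    # Only the session rows; email-related tokens stay where they are
    execute &quot;DELETE FROM users_tokens WHERE context = &#x27;session&#x27;&quot;
  end

  # Deleted tokens can&#x27;t be restored, so rolling back is a no-op
  def down, do: :ok
end
</code></pre>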
<h2>Optional Step: Add some metadata</h2>
<p>Let&#x27;s make this change useful. We will expand the session token table to include some metadata about the device the user is using. The point here is to allow them to inspect their active sessions, and see if there&#x27;s anything suspicious.</p>
<p>We start by generating a migration to add the fields.</p>
<pre><code class="language-elixir">defmodule MyApp.Repo.Migrations.AddDeviceMetadataToSessionTokens do
  use Ecto.Migration

  def change do
    alter table(:session_tokens) do
      add :device, :string, default: &quot;Unknown&quot;
      add :os, :string, default: &quot;Unknown&quot;
      add :browser, :string, default: &quot;Unknown&quot;
      add :browser_version, :string, default: &quot;&quot;
    end
  end
end
</code></pre>
<p>Next, we have to update the <code>SessionToken</code> schema to include the device metadata.</p>
<pre><code class="language-elixir">schema &quot;session_tokens&quot; do
  field :token, :binary
  field :device, :string
  field :os, :string
  field :browser, :string
  field :browser_version, :string

  belongs_to :user, User

  timestamps(type: :utc_datetime, updated_at: false)
end
</code></pre>
<p>Then, update the <code>changeset/1</code> function (now <code>changeset/2</code>) to accept these fields.</p>
<pre><code class="language-elixir">  def changeset(user, device_info) do
    token = TokenService.generate_token()

    attrs =
      %{
        token: token,
        user_id: user.id
      }
      |&gt; Map.merge(device_info)

    changeset =
      %__MODULE__{}
      |&gt; cast(attrs, [:token, :user_id, :device, :os, :browser, :browser_version])
      |&gt; validate_required([:token, :user_id, :device, :os, :browser, :browser_version])
      |&gt; foreign_key_constraint(:user_id)

    {token, changeset}
  end
</code></pre>
<p>Finally, we update the <code>generate_user_session_token/1</code> function (now <code>generate_user_session_token/2</code>) and <code>list_sessions/2</code>.</p>
<pre><code class="language-elixir">def generate_user_session_token(user, device_info) do
  {token, session_token_changeset} = SessionToken.changeset(user, device_info)
  Repo.insert!(session_token_changeset)
  token
end

def list_sessions(user, current_token) do
  SessionToken
  |&gt; where(user_id: ^user.id)
  |&gt; select([st], %{
    id: st.id,
    is_current: st.token == ^current_token,
    device: st.device,
    os: st.os,
    browser: st.browser,
    browser_version: st.browser_version
  })
  |&gt; Repo.all()
end
</code></pre>
<p>After that, we just have to parse our headers in our session controller, and extract the device info. Even if you&#x27;re using LiveView for everything, you can&#x27;t get away without having a session controller. So the same step applies.</p>
<p>In order to collect the device info, we can use various User-Agent parsers (like <code>UAParser</code>). That said, it&#x27;s not a foolproof method. For example Brave sometimes gets detected as Chrome. You need client-hints to get the real info there.</p>
<p>macOS has also stopped updating its version in the User-Agent string, so it&#x27;s not reliable either. I believe it&#x27;s stuck on <code>10_15_7</code>, and it&#x27;s not going to change.</p>
<p>So, I won&#x27;t go into the details of how to parse the User-Agent string, it depends on how serious you want to get with this.</p>
<p>But essentially, you need something along these lines:</p>
<pre><code class="language-elixir">defmodule MyApp.UserSessionController do
  # other methods and imports omitted
  def create(conn, %{&quot;user&quot; =&gt; user_params}) do
    %{&quot;email&quot; =&gt; email, &quot;password&quot; =&gt; password} = user_params

    if user = Accounts.get_user_by_email_and_password(email, password) do
      device_info = extract_user_agent_info(conn)

      conn
      # Which passes it to `Accounts.generate_user_session_token/2`
      |&gt; UserAuth.log_in_user(user, user_params, device_info)
    else
      # In order to prevent user enumeration attacks, don&#x27;t disclose whether the email is registered.
      conn
      |&gt; put_flash(:error, &quot;You have entered an invalid email or password.&quot;)
      |&gt; redirect(to: ~p&quot;/users/log_in&quot;)
    end
  end

  def extract_user_agent_info(conn) do
    conn
    |&gt; Plug.Conn.get_req_header(&quot;user-agent&quot;)
    |&gt; List.first()
    |&gt; parse_user_agent()
  rescue
    _ -&gt; default_user_agent_info()
  end

  defp parse_user_agent(nil), do: default_user_agent_info()
  defp parse_user_agent(user_agent) do
    # parse the user agent
  end
end
</code></pre>
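<p>The snippet above leaves <code>parse_user_agent/1</code> and <code>default_user_agent_info/0</code> unimplemented. As a hypothetical sketch, assuming the <code>ua_parser</code> package and its <code>UAParser.parse/1</code> (adjust the field access to whichever parser you pick):</p>
<pre><code class="language-elixir"># Hypothetical: struct fields follow the `ua_parser` package
defp parse_user_agent(user_agent) do
  parsed = UAParser.parse(user_agent)

  %{
    device: to_string(parsed.device),
    os: to_string(parsed.os),
    browser: to_string(parsed.family),
    browser_version: to_string(parsed.version)
  }
end

defp default_user_agent_info do
  %{device: &quot;Unknown&quot;, os: &quot;Unknown&quot;, browser: &quot;Unknown&quot;, browser_version: &quot;&quot;}
end
</code></pre>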
<h2>Final thoughts</h2>
<p>That&#x27;s it, we now have a separate table for session tokens, that can be easily extended with relevant metadata.</p>
<p>Now, let&#x27;s recap why we did it.</p>
<p>The generator creates what&#x27;s effectively a &quot;God Table&quot;: it knows too much and handles several different concerns.</p>
<p>When you look at a session token and notice a <code>sent_to</code> column, and you&#x27;re not aware of the generator&#x27;s structure, you have every right to be concerned. Session management has nothing to do with email delivery.
This is a classic example of a leaky abstraction, where implementation details of one concern leak into another.</p>
<p>So we separate them.</p>]]></content:encoded>
            <author>dnlytras@gmail.com (Dimitrios Lytras)</author>
        </item>
        <item>
            <title><![CDATA[Import, Require, Use, or Alias?]]></title>
            <link>https://dnlytras.com/blog/elixir-import-require-use-alias</link>
            <guid isPermaLink="false">https://dnlytras.com/blog/elixir-import-require-use-alias</guid>
            <pubDate>Thu, 29 Aug 2024 00:00:00 GMT</pubDate>
            <description><![CDATA[Clarifying the difference between different directives in Elixir]]></description>
            <content:encoded><![CDATA[<p>Coming from a decade of work in JavaScript, having these four different directives in Elixir felt strange at first. <em>Wait, what? <code>use</code> is not just a directive, but actually a macro?</em> Ok, things are really confusing.</p>
<p>Let’s try to clarify them.</p>
<h2>Alias</h2>
<p><code>alias</code> allows us to use a shorter name for a module, making it easier to reference in the code. So, when a module has a long (nested) name, we can omit the full name by using an alias. Instead, we only need to type the last part of the module&#x27;s name.</p>
<pre><code class="language-elixir">defmodule MyApp.Workers.LockUnverifiedAccountsWorker do
  use Oban.Worker, queue: :unverified_accounts

  # Let’s look at how to use alias
  alias MyApp.Domain.Accounts
  alias MyApp.Services.AdminNotifications

  @impl Oban.Worker
  def perform(_job) do
    # Using `Accounts` instead of `MyApp.Domain.Accounts`
    {locked_count, locked_users} = Accounts.lock_expired_unverified_accounts()

    for user &lt;- locked_users do
      # Using `AdminNotifications` instead of `MyApp.Services.AdminNotifications`
      AdminNotifications.notify(:user_locked, user)
    end

    {:ok, locked_count}
  end
end
</code></pre>
<p>We can also specify a custom <code>alias</code> using the <code>:as</code> option. This can be useful when we have multiple modules with the same name or when we want to give the alias a more descriptive name.</p>
<pre><code class="language-elixir">alias MyApp.Services.AdminNotifications, as: Snitch

Snitch.notify()
</code></pre>
<p>Pretty straightforward. With <code>alias</code> we can remove the extra baggage of long module names and make our code more readable. More about aliasing in Elixir can be found <a href="https://hexdocs.pm/elixir/Kernel.SpecialForms.html#alias/2">here</a>.</p>
<h2>Import</h2>
<p><code>import</code> allows us to bring functions or macros (more on macros later) from another module into the current scope without prefixing them with the module name.</p>
<p>Let’s see how we can use <code>import</code> in the same example.</p>
<pre><code class="language-elixir">defmodule MyApp.Workers.LockUnverifiedAccountsWorker do
  use Oban.Worker, queue: :unverified_accounts

  # Let’s focus on these again
  import MyApp.Domain.Accounts, only: [lock_expired_unverified_accounts: 0] # `0` means we are importing the function with 0 arity, i.e. no arguments.
  import MyApp.Services.AdminNotifications

  @impl Oban.Worker
  def perform(_job) do
    # No need to prefix with `Accounts`.
    # We only imported `lock_expired_unverified_accounts` from it
    {locked_count, locked_users} = lock_expired_unverified_accounts()

    for user &lt;- locked_users do
      # No need to prefix with `AdminNotifications`,
      # but as a side-effect, we have access to all its functions, for better or worse
      notify(:user_locked, user)
    end

    {:ok, locked_count}
  end
end
</code></pre>
<p>There’s a downside to using <code>import</code>, though. Sometimes it’s not immediately clear where a function is coming from. So if you’re using <code>import</code>, opt for <code>only</code> or <code>except</code> to avoid naming conflicts. In this example everything that <code>AdminNotifications</code> has, is now available in the current scope, which can be a bit dangerous.</p>
<p>I try to avoid <code>import</code> but make an exception for tests. For example, in my tests, I don’t mind importing all the fixtures I wrote, but in the actual code, I prefer explicitness, even if things get a bit lengthy.</p>
<p>Read more about <code>import</code> <a href="https://hexdocs.pm/elixir/Kernel.SpecialForms.html#import/2">here</a>.</p>
<h2>Require</h2>
<p><a href="https://hexdocs.pm/elixir/macros.html">Macros</a> are a metaprogramming primitive; code that generates code. They are executed at compile time, contrary to functions that get called at runtime.</p>
<p>So, if we want to use a module with macros, we need to <code>require</code> it first so it will be available at compile time.</p>
<p>Let’s create a simple module with a macro:</p>
<pre><code class="language-elixir">defmodule MyApp.Helpers.LogThisPlease do
  # In our compiled code, each call site will be replaced with the quoted `IO.puts &quot;[INFO] ...&quot;` expression.
  # A normal function, by contrast, would be referenced and executed at runtime.
  defmacro info(message) do
    quote do
      IO.puts &quot;[INFO] #{unquote(message)}&quot;
    end
  end
end
</code></pre>
<p>And let’s put it to use in our worker module.</p>
<pre><code class="language-elixir">defmodule MyApp.Workers.LockUnverifiedAccountsWorker do
  use Oban.Worker, queue: :unverified_accounts

  alias MyApp.Domain.Accounts
  alias MyApp.Services.AdminNotifications

  # We need to require the module with the macro
  require MyApp.Helpers.LogThisPlease

  @impl Oban.Worker
  def perform(_job) do
    {locked_count, locked_users} = Accounts.lock_expired_unverified_accounts()
    # Using the full name of the module
    MyApp.Helpers.LogThisPlease.info(&quot;Found #{locked_count} users to lock&quot;)

    MyApp.Helpers.LogThisPlease.info(&quot;Locking users&quot;)
    for user &lt;- locked_users do
      AdminNotifications.notify(:user_locked, user)
      MyApp.Helpers.LogThisPlease.info(&quot;User locked: #{user.id}&quot;)
    end

    {:ok, locked_count}
  end
end
</code></pre>
<p>Everything works, but it looks pretty busy.</p>
<p>If we want, we can replace <code>require</code> with <code>import</code>. By doing this, <code>import</code> pulls everything from the module into the current scope, and implicitly runs <code>require</code> for us.</p>
<p>This isn’t true for <code>alias</code> as that only creates a shorter alias for the module, and we still need to <code>require</code> it ourselves.</p>
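<p>As a small convenience, <code>require</code> also accepts an <code>:as</code> option, which aliases and requires in one directive:</p>
<pre><code class="language-elixir"># Equivalent to `alias` + `require` in a single line
require MyApp.Helpers.LogThisPlease, as: Log

# The macro is now available under the short name
Log.info(&quot;Found 3 users to lock&quot;)
</code></pre>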
<p>More about <code>require</code> can be found <a href="https://hexdocs.pm/elixir/Kernel.SpecialForms.html#require/2">here</a>.</p>
<h2>Use</h2>
<p>Speaking of macros, <code>use</code> is a macro itself. It invokes the <code>__using__/1</code> macro of another module. This is often used to set up behaviors or inject code into the current module.</p>
<p>That&#x27;s a bit abstract, so let’s revisit our previous example.</p>
<pre><code class="language-elixir">defmodule MyApp.Helpers.LogThisPlease do
  defmacro __using__(_opts) do
    quote do
      def info(message) do
        IO.puts &quot;[INFO] #{message}&quot;
      end
    end
  end
end
</code></pre>
<p>Now, we can use this module in our worker.</p>
<pre><code class="language-elixir">defmodule MyApp.Workers.LockUnverifiedAccountsWorker do
  use Oban.Worker, queue: :unverified_accounts

  alias MyApp.Domain.Accounts
  alias MyApp.Services.AdminNotifications

  use MyApp.Helpers.LogThisPlease

  @impl Oban.Worker
  def perform(_job) do
    {locked_count, locked_users} = Accounts.lock_expired_unverified_accounts()
    info(&quot;Found #{locked_count} users to lock&quot;)

    info(&quot;Locking users&quot;)
    for user &lt;- locked_users do
      AdminNotifications.notify(:user_locked, user)
      info(&quot;User locked: #{user.id}&quot;)
    end

    {:ok, locked_count}
  end
end
</code></pre>
<p>By doing this, we eliminated the need to call <code>MyApp.Helpers.LogThisPlease.info</code> and can now call <code>info</code> directly. It is a typical pattern in Elixir, especially in libraries. For example, <a href="https://github.com/sorentwo/oban">Oban</a> uses <code>use Oban.Worker</code> to set up all the necessary plumbing to make this module work as an Oban job.</p>
<p>Of course, it poses the same problem as <code>import</code> - it’s not immediately clear where the function comes from.</p>
<hr/>
<blockquote>TL;DR:<ol>
<li><strong>Alias</strong> provides shorter names for modules, making them easier to reference.</li>
<li><strong>Import</strong> brings specific or all functions and macros into the current scope.</li>
<li><strong>Require</strong> ensures a module is loaded and is necessary for using macros.</li>
<li><strong>Use</strong> invokes the <code>__using__/1</code> macro of another module, often used for setting up behaviors or injecting code.</li>
</ol></blockquote>]]></content:encoded>
            <author>dnlytras@gmail.com (Dimitrios Lytras)</author>
        </item>
        <item>
            <title><![CDATA[The full-stack framework discourse]]></title>
            <link>https://dnlytras.com/blog/fullstack-discourse</link>
            <guid isPermaLink="false">https://dnlytras.com/blog/fullstack-discourse</guid>
            <pubDate>Sat, 18 May 2024 00:00:00 GMT</pubDate>
            <description><![CDATA[JavaScript, Laravel, Rails, influencers & you]]></description>
            <content:encoded><![CDATA[<p>Every now and then, the topic of full-stack frameworks comes up. The debate is often framed as a choice between anything hot in the JavaScript world versus Laravel, Rails, Django, and so on. And it’s pretty tiring.</p>
<p>The JavaScript community has a flaw. It’s too big. There’s a lot of money to be made, and every new solution has a marketing push that makes it seem like the best thing since sliced bread.</p>
<p>For example, Next.js (which I mostly enjoy using) is advertised on Twitter circles as the best full-stack framework, with a very annoying and loud marketing push.</p>
<p>A full-stack framework has a lot of responsibilities. How do you handle emails? Writing templates? How do you test them? What about:</p>
<ul>
<li>Background jobs</li>
<li>File uploads</li>
<li>Authorization</li>
<li>User impersonation</li>
<li>Multi-tenancy</li>
<li>Tracing</li>
<li>Logs</li>
</ul>
<p>When there’s a lot of buzz, people are curious how exactly the JavaScript ecosystem is solving these problems. And this is where the friction comes in.</p>
<hr/>
<p>I don’t love opinionated full-stack frameworks, but I have built projects on the side with Laravel and Django, and they were a joy to use. There’s an undeniable productivity boost with these battle-tested frameworks. And it’s also a fantastic point of reference when you’re starting out.</p>
<p>But they are not perfect. No matter which client-side solutions they push, they don’t hold a candle to the JavaScript ecosystem. That&#x27;s subjective, of course, but that&#x27;s my experience.</p>
<p>Full-stack frameworks also ask you to commit to a certain way of doing things. Personally, I’m closer to the functional programming camp, and I don’t mind tweaking my stack to fit my quirks. But I know what I must re-implement when not using a full-stack framework, and I respect their solutions.</p>
<p>So what irks me is that these discussions lack nuance. There&#x27;s no real discussion happening. It&#x27;s just chatter that confuses beginners and makes them feel like they have to pick a side. Both cases have their merits.</p>
<hr/>
<p>Ultimately, I’m not here to tell you what to use. Context matters, and different people build different projects. I just want you to be aware of the biases of the people pushing you to use their tools.</p>
<p>For example, if a company pushes you to use its framework and promotes another service for its missing features, how would you feel if half their team has angel investments in that service? Do they care for your app, or do they want to pad their user stats?</p>
<p>If an influencer has that secret silver bullet tool to share with you, how can they vouch for it without shipping anything?</p>
<p>Do all these people have your best interest in mind or their own? And how do you know?</p>
<p>Everyone wants to make you a fan, but don’t take sides; it’s pointless. Be critical of the tools you use and prioritize sanity for yourself and your team.</p>]]></content:encoded>
            <author>dnlytras@gmail.com (Dimitrios Lytras)</author>
        </item>
        <item>
            <title><![CDATA[My impressions of Effect-TS]]></title>
            <link>https://dnlytras.com/blog/effect-ts</link>
            <guid isPermaLink="false">https://dnlytras.com/blog/effect-ts</guid>
            <pubDate>Fri, 09 Feb 2024 00:00:00 GMT</pubDate>
            <description><![CDATA[Working with Effect-TS, a powerful library for building applications in TypeScript]]></description>
            <content:encoded><![CDATA[<blockquote>when will TS be able to trace throws and show you what errors a function might throw. this feature would also help the whole ecosystem to throw properly typed errors<div><div>— <!-- -->Dax (@thdxr)</div><a class="font-medium underline text-accent hover:brightness-120 underline-offset-2 decoration-1 focus:outline-hidden focus-visible:ring-1 focus-visible:ring-accent rounded-sm focus-visible:ring-opacity-75 transition-colors duration-150 ease-in-out" href="https://twitter.com/thdxr/status/1621290194124013568" rel="nofollow noreferrer noopener" target="_blank">source</a></div></blockquote>
<p>Rapid fire questions:</p>
<ul>
<li><strong>Why use Effect-ts?:</strong> Because I want to treat errors as values and nicely handle them.</li>
<li><strong>Is it functional programming mumbo jumbo?:</strong> I write my code using Effect-ts in an imperative way. It’s inspired by functional libraries, and it gives you all the tools to write in a functional style, but it’s not dogmatic. If anything, sometimes using classes is more convenient.</li>
<li><strong>Do I have to rewrite everything?</strong>: No, you can scope it to a part of your app, or even a single function. I purposely avoid using it everywhere and keep it to the parts of my app that make sense for it.</li>
<li><strong>Is it hard to learn?</strong>: It’s not easy, but it’s not hard. It depends on how deep you want to go. The documentation is good, and the community is very helpful.</li>
</ul>
<p>Alright, let’s dive in.</p>
<h2>Error handling</h2>
<p>Let’s take a look at this example:</p>
<pre><code class="language-ts">const validationSchema = Schema.Struct({
  email: emailSchema,
  role: membershipRoleSchema,
});

export type CreateInvitationProps = Schema.Schema.Type&lt;typeof validationSchema&gt;;

export function createInvitation({pool, db}: {pool: PgPool; db: DB}) {
  function execute({
    props: {email, role},
    userId,
    orgId,
  }: {
    props: CreateInvitationProps;
    userId: User[&#x27;id&#x27;];
    orgId: Org[&#x27;id&#x27;];
  }) {
    return Effect.gen(function* () {
      yield* Effect.log(
        `(create-invitation): Creating invitation for ${email}`
      );

      // Check if the user can create an invitation
      yield* invitationAuthorizationService({pool, db}).canCreate({
        userId,
        orgId,
      });

      // Generate a UUID for the invitation
      const invitationId = yield* generateUUID();

      // Delete any previous invitations for the same email
      yield* Effect.tryPromise({
        try: () =&gt;
          db
            .deletes(&#x27;membership_invitations&#x27;, {org_id: orgId, email})
            .run(pool),
        catch: () =&gt; new DatabaseError(),
      });

      // Check if the user is already a member
      const existingMember = yield* Effect.tryPromise({
        try: () =&gt;
          db
            .selectOne(
              &#x27;users&#x27;,
              {email: email},
              {
                lateral: {
                  membership: db.selectOne(&#x27;memberships&#x27;, {
                    org_id: orgId,
                    user_id: db.parent(&#x27;id&#x27;),
                  }),
                },
              }
            )
            .run(pool),
        catch: () =&gt; new DatabaseError(),
      });

      // If the user is already a member, return an error
      if (existingMember?.membership) {
        return yield* Effect.fail(
          new InviteeAlreadyMemberError({
            inviterId: userId,
            orgId,
            inviteeEmail: existingMember.email,
            inviteeId: existingMember.id,
          })
        );
      }

      const orgRecord = yield* Effect.tryPromise({
        try: () =&gt; db.selectOne(&#x27;orgs&#x27;, {id: orgId}).run(pool),
        catch: () =&gt; new DatabaseError(),
      });

      if (!orgRecord) {
        return yield* Effect.fail(new OrgNotFoundError());
      }

      // Insert the invitation
      const invitationRecord = yield* Effect.tryPromise({
        try: () =&gt;
          db
            .insert(&#x27;membership_invitations&#x27;, {
              id: invitationId,
              role: role,
              email: email,
              org_id: orgId,
            })
            .run(pool),
        catch: () =&gt; {
          return new DatabaseError();
        },
      });

      const invitation = yield* MembershipInvitation.fromRecord({
        record: invitationRecord,
        org: {
          slug: orgRecord.slug,
          name: orgRecord.name,
          id: orgRecord.id,
        },
      });

      return invitation;
    }).pipe(
      Effect.catchTags({
        DatabaseError: () =&gt;
          Effect.fail(new InternalServerError({reason: &#x27;Database error&#x27;})),
        MembershipInvitationParse: () =&gt;
          Effect.fail(
            new InternalServerError({
              reason: &#x27;Membership invitation parse error&#x27;,
            })
          ),
        UUIDGenerationError: () =&gt;
          Effect.fail(
            new InternalServerError({reason: &#x27;UUID generation error&#x27;})
          ),
      })
    );
  }

  const validate = schemaResolver(validationSchema);

  return {
    execute,
    validate,
  };
}
</code></pre>
<p>It’s a big one. I’m not going to &quot;Hello World&quot; you. I know I’m mixing concerns; some might say I should split it into smaller functions, pull an ORM, or do things differently. But I’m not here to talk about that.</p>
<p>Chances are that, generators aside, you can read it just fine. There’s some syntax sugar, but it’s familiar.</p>
<ul>
<li>I log some stuff</li>
<li>I check if the User can create an invitation (throws <code>ForbiddenActionError</code>) otherwise</li>
<li>I generate a UUID (throws <code>UUIDGenerationError</code>, impossible to happen IRL, but maybe my version of <code>uuid</code> is broken)</li>
<li>I delete any previous invitations (throws <code>DatabaseError</code>, could be much more refined)</li>
<li>I check if the User is already a member (throws <code>InviteeAlreadyMemberError</code>)</li>
<li>I check if the Org exists (throws <code>OrgNotFoundError</code>) - unlikely to happen</li>
<li>I insert the invitation (throws <code>DatabaseError</code>)</li>
<li>I parse the invitation (throws <code>MembershipInvitationParse</code>) to the types that the frontend understands</li>
<li>I return the invitation</li>
</ul>
<p>If I hover over <code>execute</code>, I see this function signature (you might have to scroll a bit):</p>
<pre><code class="language-ts">(local function) execute({ props: { email, role }, userId, orgId, }: {
    props: CreateInvitationProps;
    userId: User[&#x27;id&#x27;];
    orgId: Org[&#x27;id&#x27;];
}): Effect.Effect&lt;MembershipInvitation, InviteeAlreadyMemberError | OrgNotFoundError | ForbiddenActionError | InternalServerError, never&gt;
</code></pre>
<p>The above means that my function has the following properties:</p>
<ul>
<li>Returns a <code>MembershipInvitation</code></li>
<li>Can error with <code>InternalServerError</code>, <code>ForbiddenActionError</code>, <code>InviteeAlreadyMemberError</code>, <code>OrgNotFoundError</code></li>
<li>No dependencies (<code>never</code> as the last type parameter. Let’s discuss that in a second)</li>
</ul>
<p>For me, this is a big deal. I can see at a glance what my function does and what can go wrong. I don’t have to read the implementation to identify the errors. More importantly, I don’t have to be defensive when using this function. I have a clear contract.</p>
<p>In the example, even though my other functions throw <code>UUIDGenerationError</code> and <code>MembershipInvitationParse</code>, I don’t find these errors meaningful for the consumer. So I catch them and group them under <code>InternalServerError</code>. The rest can go through. The consumer then can decide on the appropriate server response based on what happened.</p>
<ul>
<li><code>ForbiddenActionError</code> -&gt; 403</li>
<li><code>InviteeAlreadyMemberError</code> -&gt; 409</li>
<li><code>OrgNotFoundError</code> -&gt; 404 (or even a full redirect to <code>/login</code>)</li>
<li><code>InternalServerError</code> -&gt; 500</li>
</ul>
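<p>That mapping can be sketched as a plain TypeScript function. The error shapes below are simplified stand-ins, not the actual <code>Data.TaggedError</code> classes:</p>

```typescript
// Simplified stand-ins for the tagged errors; the real classes are
// built with Data.TaggedError / Data.TaggedClass and carry more data.
type AppError =
  | { _tag: 'ForbiddenActionError' }
  | { _tag: 'InviteeAlreadyMemberError' }
  | { _tag: 'OrgNotFoundError' }
  | { _tag: 'InternalServerError' };

// The consumer decides the HTTP status from the error tag alone.
function statusFor(error: AppError): number {
  switch (error._tag) {
    case 'ForbiddenActionError':
      return 403;
    case 'InviteeAlreadyMemberError':
      return 409;
    case 'OrgNotFoundError':
      return 404;
    case 'InternalServerError':
      return 500;
  }
}
```

<p>Because the union is closed, the <code>switch</code> is exhaustive: adding a new error to the union makes the compiler flag this function until the new case is handled.</p>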
<hr/>
<p>Now, let’s take a closer look at one of these errors. Specifically, the <code>InternalServerError</code>. I defined it as a class and can pass a <code>reason</code> and <code>metadata</code> to it. Neither will reach the consumer; they are meant to be logged and reported.</p>
<pre><code class="language-ts">export class InternalServerError extends Data.TaggedClass(
  &#x27;InternalServerError&#x27;
)&lt;{
  readonly reason?: string;
  readonly metadata?: unknown;
}&gt; {
  constructor(props: {reason?: string; metadata?: unknown}) {
    super(props);
    console.log(&#x27;[InternalServerError]&#x27;, props);
    // send to reporting service
  }
}
</code></pre>
<h2>Schema</h2>
<p>As you move further into Effect-ts, you might notice that it also makes various other libraries obsolete. One of them is <code>zod</code>, which I can replace with <code>Schema</code>. Let’s take a look at my <code>User</code> class:</p>
<pre><code class="language-ts">import type {ParseError} from &#x27;@effect/schema/ParseResult&#x27;;
import * as Schema from &#x27;@effect/schema/Schema&#x27;;
import {Data, Effect} from &#x27;effect&#x27;;
import {compose} from &#x27;effect/Function&#x27;;
import type {users} from &#x27;zapatos/schema&#x27;;

import {db} from &#x27;~/core/db/schema.server.ts&#x27;;

import {emailSchema} from &#x27;./email.server.ts&#x27;;
import {uuidSchema} from &#x27;./uuid.server.ts&#x27;;

class UserIdParseError extends Data.TaggedError(&#x27;UserIdParseError&#x27;)&lt;{
  cause: ParseError;
}&gt; {}

class UserParseError extends Data.TaggedError(&#x27;UserParseError&#x27;)&lt;{
  cause: ParseError;
}&gt; {}

export const userNameSchema = Schema.Trim.pipe(
  Schema.minLength(2, {
    message: () =&gt; &#x27;Name must be at least 2 characters&#x27;,
  }),
  Schema.maxLength(100, {
    message: () =&gt; &#x27;Name cannot be more than 100 characters&#x27;,
  })
);

const UserIdBrand = Symbol.for(&#x27;UserIdBrand&#x27;);
export const userIdSchema = uuidSchema.pipe(Schema.brand(UserIdBrand));

export class User extends Schema.Class&lt;User&gt;(&#x27;User&#x27;)({
  id: userIdSchema,
  name: userNameSchema,
  email: emailSchema,
  emailVerified: Schema.Boolean,
  createdAt: Schema.Date,
  updatedAt: Schema.Date,
}) {
  static fromUnknown = compose(
    Schema.decodeUnknown(this),
    Effect.mapError((cause) =&gt; new UserParseError({cause}))
  );

  static fromRecord(record: users.JSONSelectable) {
    return User.fromUnknown({
      id: record.id,
      name: record.name,
      email: record.email,
      emailVerified: record.email_verified,
      createdAt: record.created_at,
      updatedAt: record.updated_at,
    });
  }

  getRecord() {
    return {
      id: this.id,
      name: this.name,
      email: this.email,
      email_verified: this.emailVerified,
      updated_at: db.toString(this.updatedAt, &#x27;timestamptz&#x27;),
      created_at: db.toString(this.createdAt, &#x27;timestamptz&#x27;),
    };
  }
}

export const parseUserId = compose(
  Schema.decodeUnknown(userIdSchema),
  Effect.mapError((cause) =&gt; new UserIdParseError({cause}))
);
</code></pre>
<p>Again, too much code, but I want to show you the <code>Schema</code> part. I define the schema for my <code>User</code> class, and then I can use it to parse a database record or an unknown object.</p>
<p>Just like that...</p>
<pre><code class="language-ts">// Effect.Effect&lt;User, UserParseError, never&gt;
const user = User.fromUnknown(unknownObject);
</code></pre>
<p>If something odd happens, I get a <code>UserParseError</code>, and I can handle it accordingly.</p>
<p>You might have noticed that I use <code>Brand</code> to create a new type for my <code>UserId</code>. It is a convenient feature, and I blogged about it <a href="/blog/nominal-types">here</a>. Essentially, I never want to confuse a <code>UserId</code> with an <code>OrgId</code> or an <code>AnnouncementId</code>. They are all UUIDs, but they are identifiers of different resources.</p>
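<p>The underlying trick can be shown in plain TypeScript. This is a hand-rolled illustration of branding, not how <code>Schema.brand</code> is implemented:</p>

```typescript
// The phantom __brand property exists only at the type level, so a
// UserId is a plain string at runtime but a distinct type to tsc.
type UserId = string & { readonly __brand: 'UserId' };
type OrgId = string & { readonly __brand: 'OrgId' };

const asUserId = (id: string): UserId => id as UserId;
const asOrgId = (id: string): OrgId => id as OrgId;

function describeUser(id: UserId): string {
  return `user:${id}`;
}

describeUser(asUserId('some-uuid')); // ok
// describeUser(asOrgId('some-uuid')); // compile error: OrgId is not a UserId
```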
<hr/>
<p>Let’s take a look at a password module. I use <code>bcrypt</code> to hash and compare passwords and <code>Schema</code> to validate them.</p>
<pre><code class="language-ts">import type {ParseError} from &#x27;@effect/schema/ParseResult&#x27;;
import * as Schema from &#x27;@effect/schema/Schema&#x27;;
import bcrypt from &#x27;bcryptjs&#x27;;
import {Data, Effect} from &#x27;effect&#x27;;
import {compose, pipe} from &#x27;effect/Function&#x27;;

const SALT_ROUNDS = 10;
const PasswordBrand = Symbol.for(&#x27;PasswordBrand&#x27;);

export const passwordSchema = Schema.String.pipe(
  Schema.minLength(8, {
    message: () =&gt; &#x27;Password should be at least 8 characters long&#x27;,
  }),
  Schema.maxLength(100, {
    message: () =&gt; &#x27;Password should be at most 100 characters long&#x27;,
  }),
  Schema.brand(PasswordBrand)
);

export type Password = Schema.Schema.Type&lt;typeof passwordSchema&gt;;

export class PasswordHashError {
  readonly _tag = &#x27;PasswordHashError&#x27;;
}
class PasswordParseError extends Data.TaggedError(&#x27;PasswordParseError&#x27;)&lt;{
  cause: ParseError;
}&gt; {}

export const parsePassword = compose(
  Schema.decodeUnknown(passwordSchema),
  Effect.mapError((cause) =&gt; new PasswordParseError({cause}))
);

export function hashPassword(password: Password) {
  return pipe(
    Effect.tryPromise(() =&gt; bcrypt.hash(password, SALT_ROUNDS)),
    Effect.mapError(() =&gt; new PasswordHashError())
  );
}

export function comparePasswords({
  plainText,
  hashValue,
}: {
  plainText: string;
  hashValue: string;
}) {
  return Effect.tryPromise({
    try: () =&gt; bcrypt.compare(plainText, hashValue),
    catch: () =&gt; new PasswordHashError(),
  });
}
</code></pre>
<h2>And this is where I stop</h2>
<p>I mentioned that Effect-ts has a lot of features (<a href="https://effect.website/docs/concurrency/queues">Queues</a>
, <a href="https://effect.website/docs/concurrency/pubsub">PubSub</a>
, <a href="https://effect.website/docs/concurrency/schedule">Scheduling</a>
, <a href="https://effect.website/docs/streaming/stream/creating">Streams</a>
, <a href="https://effect.website/docs/context-management/layers">Dependency Injection</a>, etc.)</p>
<p>Here’s the deal. I showed you some of the parts that I use. For me, they work nicely, and they solve my problems. But I’m certain that I’m not doing things the conventional way. Here’s an example:</p>
<p>Remember this type signature?</p>
<pre><code class="language-ts">Effect.Effect&lt;MembershipInvitation, InviteeAlreadyMemberError | OrgNotFoundError | ForbiddenActionError | InternalServerError, never&gt;
</code></pre>
<p>I said the last type parameter is a union of the dependencies, and in this case, it’s empty, just <code>never</code>. I’m using a database, and a query builder (specifically the <a href="https://jawj.github.io/zapatos/">Zapatos library</a>), so why is it not listed? Effect-ts offers a way to do dependency injection, and it’s nice. But I prefer not to do it that way. I pass the <code>pool</code> and <code>db</code> as arguments to my function instead.</p>
<hr/>
<p>I’m trying to figure out how to put this; I want to use Effect-ts but not to be tied to it. While I was trying to understand and use the library, I found myself in a rabbit hole.
I was always searching for how to do it the <strong>right way</strong>. And this led me to look at other people&#x27;s snippets and repos, and I became overwhelmed. I didn’t understand half of it, and I always thought I was doing everything wrong.</p>
<p>Instead of using a library, I spent my time trying to learn a new framework, refactoring code, and rewiring my brain. My side project was on hold, and I was not productive. (Oh, the horror)</p>
<p>I decided to use Effect-ts for validation and error handling. Essentially, I write the annoying code that can be modeled more efficiently with Effect, and use <a href="https://effect.website/docs/essentials/running#runpromiseexit"><code>runPromiseExit</code></a> to get the result. I’m losing a lot of the library&#x27;s power, but I also reduce the surface area of what I have to learn and maintain.</p>
<p>If things don’t work out, I can keep the Error classes, replace the validation library, and remove the <code>yield*</code> and <code>Effect.gen</code>, and I’m back where I started. I won’t have invested that much.</p>
<p>Don’t get me wrong, I can only speak highly of Effect, and I’m grateful to the maintainers. If anything, I hope it will get more traction, as it’s a fantastic piece of software. If my team were using Effect, I would 100% use it to its full potential, and I would be singing a different song.</p>
<blockquote>i love @EffectTS_ it’s the exact mental model i want but regrettably i&#x27;ll never put it in a serious application. Libraries that make your code look non-standard always ends up creating pain down the road. Hardest part about js is convincing devs to use fewer tools<div><div>— Dax (@thdxr)</div><a href="https://twitter.com/thdxr/status/1681145294078066688" rel="nofollow noreferrer noopener" target="_blank">source</a></div></blockquote>
<h2>Community</h2>
<p>Effect has a vibrant community; you can find them on <a href="https://discord.com/invite/effect-ts">Discord</a>.</p>
<p>I have only good things to say. I have asked a ton of stupid questions and always got a response. I even had feature requests that were implemented in a matter of days.</p>
<p>There are also great people tweeting about it, so you can follow them and get some insights. Here are some of them:</p>
<ul>
<li><a href="https://twitter.com/MichaelArnaldi">Michael Arnaldi</a></li>
<li><a href="https://twitter.com/jbmusso">Jean-Baptiste Musso</a></li>
<li><a href="https://twitter.com/c9antoine">Antoine Coulon</a></li>
<li><a href="https://twitter.com/Patrick_Roza">Patrick Roza</a></li>
<li><a href="https://twitter.com/schickling">Johannes Schickling</a></li>
</ul>
<hr/>
<blockquote>Resources:<ul>
<li><a href="https://effect.website/">Documentation</a></li>
<li><a href="https://effect.website/blog/">Blog</a></li>
<li><a href="https://discord.com/invite/effect-ts">Discord</a></li>
<li><a href="https://github.com/antoine-coulon/effect-introduction">Effect-introduction (github.com)</a> (recommended)</li>
<li><a href="https://www.sandromaglione.com/articles/from-fp-ts-to-effect-ts-migration-guide">From FP-TS to effect-ts (sandromaglione.com)</a></li>
<li><a href="https://www.youtube.com/watch?v=fTN8BX5qj6s">effect for beginners (youtube.com)</a></li>
<li><a href="https://www.linkedin.com/pulse/quick-thoughts-effect-ts-jesse-warden/">Quick thoughts on EffectTS (linkedin.com)</a></li>
<li><a href="https://github.com/effect-ts-app/boilerplate">effect-ts-app/boilerplate (github.com)</a></li>
<li><a href="https://github.com/tim-smart/sqlfx">sqlfx (github.com)</a></li>
</ul></blockquote>]]></content:encoded>
            <author>dnlytras@gmail.com (Dimitrios Lytras)</author>
        </item>
        <item>
            <title><![CDATA[Working with Remix]]></title>
            <link>https://dnlytras.com/blog/working-with-remix</link>
            <guid isPermaLink="false">https://dnlytras.com/blog/working-with-remix</guid>
            <pubDate>Mon, 11 Sep 2023 00:00:00 GMT</pubDate>
            <description><![CDATA[The conventions I use when working with Remix]]></description>
<content:encoded><![CDATA[<p>These past months I&#x27;ve been working with Remix. The release of <a href="https://nextjs.org/blog/next-13">Next.js 13</a> and its follow-ups didn’t make much of an impression on me; if anything, they were kind of a deal breaker. As a result, I decided to take a closer look at Remix, and I was blown away by its ease of use.</p>
<p>In short, I find the Remix approach simpler. I write my backend, and I use the <code>loader</code> &amp; <code>action</code> functions to communicate with React. I’m oversimplifying, but that&#x27;s all I perceive. In reality, it’s a well-thought-out, complex framework that does a great job of hiding all that complexity from me. And it just works, everywhere.</p>
<p>Anyway, let’s go through some conventions I decided on.</p>
<blockquote>Contents:<ul>
<li><a href="#throw-in-loaders-return-in-actions">Throw in loaders, return in actions</a></li>
<li><a href="#parse-formdata">Parse formData</a></li>
<li><a href="#improved-response-types">Improved response types</a></li>
<li><a href="#parse-env-variables-on-load">Parse ENV variables on load</a></li>
<li><a href="#feature-organization--business-logic">Feature organization &amp; business logic</a></li>
<li><a href="#final-thoughts">Final thoughts</a></li>
</ul></blockquote>
<h2>Throw in loaders, return in actions</h2>
<p>By reading the documentation you might have noticed some snippets like this one:</p>
<pre><code class="language-ts">export async function loader({request, params}: LoaderArgs) {
  // ...
  if (invoice === null) {
    throw json(&#x27;Not Found&#x27;, {status: 404}); // new Response with `application/json` header
  }
}
</code></pre>
<blockquote><p>Take a look at the docs for <a href="https://remix.run/docs/en/main/utils/redirect">redirect</a> &amp; <a href="https://remix.run/docs/en/main/utils/json">json</a>, if you’re not familiar</p></blockquote>
<p>Now, a fair question: what&#x27;s the difference between throwing and returning a response?</p>
<p>When it comes to <strong>redirects</strong> (30X status codes), there’s no visible difference. It’s down to semantics. Let’s see two examples:</p>
<ul>
<li><strong>Password reset success</strong> - Let’s take the user to the login page <code>return redirect(&#x27;/login&#x27;)</code></li>
<li><strong>No session found</strong> - Nope. No access for you <code>throw redirect(&#x27;/login&#x27;)</code></li>
</ul>
<p>By throwing, you let the next developer understand that the redirect is forced due to some bad request.</p>
<p>When it comes to every other response (non-30X status) though, it’s all about how you handle the errors. When throwing a response, <strong>we diverge from the happy path</strong>, and the error is handled in the closest <a href="https://remix.run/docs/en/main/guides/errors">ErrorBoundary</a>. This isn’t very helpful at times, as you don’t want to show an <code>ErrorBoundary</code> if a username is taken.</p>
<p>I usually like to do the following:</p>
<ol>
<li><strong>Throw in loaders</strong>. The page can’t be rendered properly. Many things can happen here. (Missing permissions, database errors, etc.) The page is broken, so rendering the <code>ErrorBoundary</code> makes sense.</li>
<li><strong>Return in actions</strong>. Handle the errors as values, and depending on the severity render the appropriate UI. Validation error? Inline the error message. Database hiccup? Could be a toast.</li>
</ol>
<p>Of course, this isn’t set in stone, but a good rule of thumb for me.</p>
<h2>Parse formData</h2>
<p>What a breath of fresh air to just use uncontrolled components and gather everything in a FormData instance, right? Unfortunately, everything you pick from it is typed as <code>FormDataEntryValue | null</code>.</p>
<p>There are all sorts of validation libraries you can use, but I like to keep it simple. I gather the formData, parse them on the server with <a href="https://github.com/colinhacks/zod">zod</a>, and return validation errors instead.</p>
<p>Something like this:</p>
<pre><code class="language-ts">export async function action({request}: {request: Request}) {
  const formData = await request.formData();
  const validation = validate(Object.fromEntries(formData));

  if (!validation.success) {
    return json({ok: false}, {status: 422});
  }

  // continue
}
</code></pre>
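<p>The <code>validate</code> helper is left undefined above. Here is a minimal hand-rolled stand-in that mimics the shape of a zod <code>safeParse</code> result — treat it as an illustration, not the actual implementation:</p>

```typescript
// Takes the plain object from Object.fromEntries(formData) and returns
// either typed data or a list of field errors, zod-safeParse style.
type ValidationResult =
  | { success: true; data: { email: string; password: string } }
  | { success: false; errors: string[] };

function validate(input: Record<string, unknown>): ValidationResult {
  const errors: string[] = [];
  const email = typeof input.email === 'string' ? input.email.trim() : '';
  const password = typeof input.password === 'string' ? input.password : '';

  if (!email.includes('@')) errors.push('Invalid email');
  if (password.length < 8) errors.push('Password must be at least 8 characters');

  return errors.length > 0
    ? { success: false, errors }
    : { success: true, data: { email, password } };
}
```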
<p>Unrelated to Remix, I’m experimenting with <a href="https://effect.website/">effect-ts</a>, so here’s what I really do:</p>
<pre><code class="language-ts">export const action = withAction(
  Effect.gen(function* (_) {
    const {request} = yield* _(ActionArgs);
    const formData = yield* _(Effect.promise(() =&gt; request.formData()));

    const {validate, execute} = resetPassword();
    const props = yield* _(validate(Object.fromEntries(formData)));

    yield* _(execute(props));

    return new Redirect({
      to: &#x27;/login?resetPassword=true&#x27;,
      init: request,
    });
  }).pipe(
    Effect.catchTags({
      InternalServerError: () =&gt; Effect.fail(new ServerError({})),
      UserNotFoundError: () =&gt;
        Effect.fail(new BadRequest({errors: [&#x27;User not found&#x27;]})),
      PasswordResetTokenNotFoundError: () =&gt;
        Effect.fail(new BadRequest({errors: [&#x27;Token not found&#x27;]})),
      ValidationError: (error) =&gt;
        Effect.fail(new BadRequest({errors: error.messages})),
    })
  )
);
</code></pre>
<p>If this scares you, ignore it. What I want to highlight is how flexible Remix is. If I want to overengineer something, I can do so.</p>
<h2>Improved response types</h2>
<p>I find the inferred types that <code>useLoaderData</code>, <code>useActionData</code> &amp; <code>useFetcher</code> return to be <em>slightly</em> incorrect (<a href="https://github.com/remix-run/remix/issues/3931">Issue-3931</a>).
For that case I use <a href="https://github.com/kiliman/remix-typedjson">remix-typedjson</a> and its hooks. So instead of <code>useFetcher</code> I use <code>useTypedFetcher</code>, and so on.</p>
<pre><code class="language-ts">// (taken from remix-typedjson docs)
const fetcher = useTypedFetcher&lt;typeof action&gt;();
fetcher.data; // data property is fully typed
</code></pre>
<p>I’m on the fence about this, as I’m using a &quot;patched&quot; version of Remix&#x27;s helpers. There might be dragons here, so feel free to ignore this advice.</p>
<h2>Parse ENV variables on load</h2>
<p>This isn’t groundbreaking, or strictly related to Remix, but I rarely see it mentioned in similar introductory articles.</p>
<p>Essentially, don’t do this:</p>
<pre><code class="language-ts">const SESSION_SECRET = process.env.SESSION_SECRET as string;
</code></pre>
<p>Instead, parse <code>process.env</code>, and let the application crash if something is missing. You don’t want to have your app running, only to find out that your emails were never sent.</p>
<p>Here’s what to do instead:</p>
<pre><code class="language-ts">const envValidationSchema = zod.object({
  SESSION_SECRET: zod.string().nonempty(),
});

// Throw on-load if missing
const config = envValidationSchema.parse(process.env);
// from here on config.SESSION_SECRET is populated
</code></pre>
<p>The same goes for all other similar instances: initializing database/SMTP connections, etc.</p>
<pre><code class="language-ts">const envValidationSchema = zod.object({
  SMTP_HOST: zod.string().nonempty(),
  SMTP_SECURE: zod.coerce.boolean(),
  SMTP_USER: zod.string().min(2),
  SMTP_PASSWORD: zod.string().min(8),
  SMTP_PORT: zod.coerce.number(),
  //
  EMAIL_FROM: zod.string().email(),
});

// Throw on-load if missing
const config = envValidationSchema.parse(process.env);

const transporter = createTransport({
  host: config.SMTP_HOST,
  port: config.SMTP_PORT,
  secure: config.SMTP_SECURE,
  auth: {
    user: config.SMTP_USER,
    pass: config.SMTP_PASSWORD,
  },
  pool: true,
});
</code></pre>
<p>You can colocate this in a single <code>env.server.ts</code> file, but whatever you do, parse your env variables.</p>
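<p>The same idea works without a schema library. A dependency-free sketch (the <code>requireEnv</code> helper and its name are mine, not from any framework):</p>

```typescript
// Fail fast at startup instead of letting `undefined` leak into the
// app the way a bare `as string` cast would.
function requireEnv(
  env: Record<string, string | undefined>,
  name: string
): string {
  const value = env[name];
  if (value === undefined || value === '') {
    throw new Error(`Missing required env variable: ${name}`);
  }
  return value;
}

// In the app this would read from process.env at module load time.
const SESSION_SECRET = requireEnv({ SESSION_SECRET: 's3cret' }, 'SESSION_SECRET');
```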
<h2>Feature organization &amp; business logic</h2>
<p>Here’s how I organize my projects (so far)</p>
<pre><code class="language-sh">/modules
  /domain
    # example
    - membership-invitation.ts
    - membership.ts
    # ..
    - org.ts
    - user.ts
  /services
    # example (a bit of a mouthful, might need to rethink this)
    - invitation-authorization-service.server.ts
  /use-cases
    # example
    /create-user
      - create-user.server.ts
      - validation.server.ts
/routes
  # all public routes
  /_auth
    # example
    /register
      - _route.tsx
      - action.server.ts
      - register-form.tsx
      - register-page.tsx
  # all protected routes
  /_dashboard
  - &amp;.tsx
</code></pre>
<p>I don’t like these oversimplified examples where you do everything in your actions. Take that business logic, and create a dedicated file for it. I like to group them under <code>modules/use-cases</code>. There are many approaches, you can go wild with DDD, whatever - just don’t blindly add everything to the loaders.</p>
<p>As for the front end, I try to keep my component library in a different (PNPM) workspace. One-offs can live inside the route, while exceptions (e.g. <code>&lt;TeamSwitcher /&gt;</code>, <code>&lt;UserNav /&gt;</code>) can go in a <code>/components</code> folder.</p>
<p>Oh, almost forgot. I like to colocate everything I need inside the same route. A single folder <code>_route.tsx</code> re-exports everything (meta, default-page, loader, action). I’m writing this piece before Remix V2 lands, so I’m using the <a href="https://github.com/kiliman/remix-flat-routes">Remix Flat Routes</a> package to make it happen.</p>
<blockquote>I’m not dogmatic. I’ll happily throw everything out of the window if it’s slowing me down. For now, this setup works, but will change in the future.</blockquote>
<h2>Final thoughts</h2>
<p>I want a couple of things from my SSR framework:</p>
<ol>
<li>Let me run my business logic on the server. Don’t be too smart about it.</li>
<li>Make the transition to React seamless.</li>
</ol>
<p>So far, Remix covers all. There are sharp edges, but nothing blocking.</p>
<p>Is it a batteries-included framework like Laravel, Rails, or (the closest thing in the Node.js land) Adonis.js?</p>
<p>Nope, and I don’t mind. If anything, the JS ecosystem has so many tools and options that I would dislike having certain libraries shoehorned in.</p>
<p>Remix gets out of my way and lets me just write code. It has the right amount of conventions, without being restrictive or weird.</p>
<blockquote>Resources:<ul>
<li><a href="https://remix.run/docs/en/main">Remix docs</a></li>
<li><a href="https://remix.guide/">Remix Guide (content aggregator)</a></li>
<li><a href="https://afloat.dev/posts/remix-mental-model">How to Think About Remix (afloat.dev)</a></li>
<li><a href="https://github.com/kiliman/remix-flat-routes">remix-flat-routes (GitHub)</a></li>
<li><a href="https://github.com/kiliman/remix-typedjson">remix-typedjson (GitHub)</a></li>
<li><a href="https://sergiodxa.com/articles/throwing-vs-returning-responses-in-remix">Throwing vs. Returning responses in Remix (sergiodxa.com)</a></li>
<li><a href="https://www.jacobparis.com/content/remix-custom-routes">Colocate your routes into feature folders with Remix Custom Routes (jacobparis.com)</a></li>
<li><a href="https://www.jacobparis.com/content/type-safe-env">Typesafe environment variables with Zod (jacobparis.com)</a></li>
</ul></blockquote>]]></content:encoded>
            <author>dnlytras@gmail.com (Dimitrios Lytras)</author>
        </item>
        <item>
            <title><![CDATA[Building a browser extension]]></title>
            <link>https://dnlytras.com/blog/building-browser-extension</link>
            <guid isPermaLink="false">https://dnlytras.com/blog/building-browser-extension</guid>
            <pubDate>Sat, 11 Feb 2023 00:00:00 GMT</pubDate>
            <description><![CDATA[Everything I learned building an extension for Chromium browsers]]></description>
            <content:encoded><![CDATA[<link rel="preload" as="image" href="https://res.cloudinary.com/ds9pd4ywd/image/upload/v1675781730/blog-images/posts/building-browser-extension/content-script-setup_terc19.png"/><link rel="preload" as="image" href="https://res.cloudinary.com/ds9pd4ywd/image/upload/v1675781719/blog-images/posts/building-browser-extension/cookie-flow_ktt2ou.png"/><p>Recently I worked on building a browser extension. It was unexplored territory for me, and the road was a bit bumpy. I wanted to share some of my findings, as most of the Stack Overflow discussions I found were outdated or slightly misleading. Especially with the transition from Manifest v2 to Manifest v3.</p>
<blockquote>My target was Chromium-based browsers, so I can’t speak for Firefox or Safari. Although Firefox supports the same WebExtensions APIs, I haven’t tested it.</blockquote>
<h2>Stack</h2>
<p>Here’s a quick overview of the technologies I used:</p>
<ul>
<li><code>Lit</code>: A perfect fit for this use case. It’s a lightweight library that allows you to write Web Components with ease.</li>
<li><code>TypeScript</code>: No-brainer really. <code>chrome-types</code> were a massive help.</li>
<li><code>Storybook</code>: Mandatory for UI development once your app becomes coupled to browser extension APIs.</li>
<li><code>Vite</code>: Vite was used for two purposes. To provide a preview environment where I could test my extension, and for bundling the extension in library mode (only JS output, no HTML).</li>
<li><code>Vitest</code>: Super fast, no-config test runner. If you have used Jest with TypeScript you probably understand how annoying it is. Vitest just works.</li>
</ul>
<h2>Types of browser extensions</h2>
<p>There are a few different options when building an extension.</p>
<p>First, you can build a &quot;Popup&quot; extension. Something like &quot;Save to Notion&quot;. These are small web apps that load in the sandbox environment of a popup window, located in the extensions toolbar. You can use all of the browser extension APIs available, and you have great flexibility. If you want to bring in a framework like React, you can do it without any issues. It’s an isolated environment, where you can go wild.</p>
<blockquote>
<p>Browser extension APIs are about accessing the state of Tabs, Cookies, Bookmarks, and more. Here’s an overview of the <a href="https://developer.mozilla.org/en-US/docs/Mozilla/Add-ons/WebExtensions/Browser_support_for_JavaScript_APIs">available APIs</a>.</p>
</blockquote>
<p>Second, you can build an extension that simply injects a script (called a content script) into the pages of your choosing. There you can manipulate the DOM. A common use case is something like Fantasy Football extensions, which add more metadata, like points and schedules, to the usually boring official pages. In that case, things get trickier. You can’t use the browser APIs directly. You have to rely on a service worker to do the heavy lifting for you.</p>
<p>Also, the content script is loaded in the context of the page, so it’s probably not a good idea to bring in a framework like <code>React</code>.</p>
<p>Lastly, you can have a combination of both, something like Grammarly or 1Password. You keep your content script, but also ship a popup where you can prompt the user to tweak their settings.</p>
<p>In my case, I had to create an inline call-to-action button, that opens a sidebar, so using the first popup approach wasn’t viable. Everything that I’ll be sharing here is based on the second approach.</p>
<h2>React, Lit &amp; conflicting styles</h2>
<p>Right off the bat, I was thinking of building a <code>React</code>/<code>Tailwind</code> application. I quickly realized that since I can’t use the popup route, I wouldn’t run my application in a sandbox environment. I didn’t want to load <code>React</code> (maybe a second version of it, if the page already does), or bother with conflicting styles.</p>
<blockquote>
<p><code>Tailwind</code> allows you to set a custom prefix. It helps with the same problem, albeit without 100% safety.</p>
</blockquote>
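<p>For reference, that option lives in the Tailwind config. A minimal sketch where <code>ext-</code> is an arbitrary prefix of my choosing:</p>

```javascript
// tailwind.config.js: generated classes become e.g. `ext-flex`,
// lowering the odds of clashing with the host page's own CSS.
module.exports = {
  prefix: 'ext-',
};
```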
<p>There’s one elegant solution that handles both, and that&#x27;s Web Components. Thankfully there’s <code>Lit</code> where you can hit the ground running without any boilerplate. For a newbie to Web Components like me, it helped immensely.</p>
<p>Let’s see a quick example of how a LitElement looks.</p>
<p>Here we have a simple component that handles the sidebar positioning. It doesn’t have any other responsibility. And by using composition (<a href="https://lit.dev/docs/components/shadow-dom/#using-the-slot-element">slots</a>) we render the content. Similar to what we would use <code>{children}</code> for in <code>React</code>.</p>
<pre><code class="language-ts">import {css, html, LitElement} from &#x27;lit&#x27;;
import {customElement, property} from &#x27;lit/decorators.js&#x27;;

import {noop} from &#x27;../utils/noop&#x27;;
import {icons} from &#x27;./icons&#x27;;

@customElement(&#x27;sidebar-container&#x27;)
export class SidebarContainer extends LitElement {
  static styles = css`
    :host {
      /* */
    }
    .sidebar {
      /* */
    }
    .content {
      /* */
    }
    .close-button {
      /* */
    }
    .header {
      /* */
    }
    .list {
      /* */
    }
  `;

  @property({type: Function})
  onClose: VoidFunction = noop;

  render() {
    return html`&lt;div class=&quot;sidebar&quot;&gt;
      &lt;div class=&quot;content&quot;&gt;
        &lt;div class=&quot;header&quot;&gt;
          &lt;button @click=${this.onClose} class=&quot;close-button&quot;&gt;
            ${icons.xCircle}
          &lt;/button&gt;
        &lt;/div&gt;
        &lt;div&gt;
          &lt;ul class=&quot;list&quot;&gt;
            &lt;slot&gt;&lt;/slot&gt;
          &lt;/ul&gt;
        &lt;/div&gt;
      &lt;/div&gt;
    &lt;/div&gt;`;
  }
}
</code></pre>
<p>And here’s an idea of how it would be used. You can pass reactive props, callbacks, or slotted elements, giving you all the tools you need.</p>
<pre><code class="language-ts">switch (this.state.status) {
  case &#x27;loading&#x27;:
    return html`&lt;loading-state&gt;&lt;/loading-state&gt;`;
  case &#x27;unauthenticated&#x27;:
    return html`&lt;unauthenticated-state&gt;&lt;/unauthenticated-state&gt;`;
  case &#x27;error&#x27;:
    return html`&lt;error-state&gt;&lt;/error-state&gt;`;
  case &#x27;empty&#x27;:
    return html`&lt;empty-state&gt;&lt;/empty-state&gt;`;
  case &#x27;ready&#x27;:
    return html`&lt;sidebar-content .onClick=${this.handleClose}&gt;
      ${this.data.map(
        (entry) =&gt; html`&lt;li&gt;
          &lt;my-sidebar-row .entry=${entry}&gt;&lt;/my-sidebar-row&gt;
        &lt;/li&gt;`
      )}&gt;&lt;/sidebar-content
    &gt;`;
}
</code></pre>
<p>Honestly, Web Components are the best option when writing browser extensions (of that kind at least). They&#x27;re lightweight, well supported, solve the problem of conflicting styles, and they&#x27;re easy to use.</p>
<p>I’m planning to write more about them in a future post, as I’m still learning about them. For now, I’ll leave you with a <a href="https://lit.dev/docs/getting-started/">link to the Lit documentation</a>.</p>
<h2>Posting messages from the content script to service worker</h2>
<p>The main point of my application was to fetch data from a remote server. Unfortunately, we can’t just use <code>fetch</code> from our injected script. We&#x27;ll be slapped in the face by a CORS error. We have to create a service worker to do this for us.</p>
<p>First, we need to let our service-worker know that we want to fetch some data. To communicate with the service worker from the content script, we have to use the <code>chrome.runtime.sendMessage</code> API.</p>
<p><img src="https://res.cloudinary.com/ds9pd4ywd/image/upload/v1675781730/blog-images/posts/building-browser-extension/content-script-setup_terc19.png" alt="image"/></p>
<p>In the following example, I’m asking the service-worker to fetch some data the moment the component gets added to the DOM.</p>
<pre><code class="language-ts">@customElement(&#x27;some-root-component&#x27;)
export class SomeRootComponent extends LitElement {
  // ...

  connectedCallback() {
    super.connectedCallback();

    // On load, kindly ask the service-worker to fetch the data
    chrome.runtime.sendMessage(
      {type: &#x27;fetchSomething&#x27;, payload: {id: this.someId}},
      // and assign a callback to run when the service-worker responds
      (response: RequestState&lt;MyResponse&gt;) =&gt; {
        this._doSomething(response);
      }
    );
  }
}
</code></pre>
<p>And then we have the service worker lurking, waiting for the message.</p>
<pre><code class="language-ts">import {MessageType} from &#x27;../types&#x27;;
import {requests} from &#x27;./requests&#x27;;

type MessageSender = chrome.runtime.MessageSender;

// Listen on messages from the content script
chrome.runtime.onMessage.addListener(function (
  message: MessageType,
  _: MessageSender,
  sendResponse: (response: unknown) =&gt; void
) {
  if (message.type === &#x27;fetchSomething&#x27;) {
    requests
      .fetchSomething(message.payload.id)
      .then((response) =&gt; {
        sendResponse(response);
      })
      .catch(() =&gt; {
        sendResponse({status: &#x27;error&#x27;, error: &#x27;UNKNOWN_ERROR&#x27;});
      });
  }
  return true; // keep the message channel open for the async sendResponse
});
</code></pre>
<p>The message can be anything. Maybe we ask to create a bookmark. It depends on the use case. All that matters is that for every message, we have a handler in the service worker.</p>
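<p>For completeness, here&#x27;s a sketch of what such a <code>MessageType</code> union could look like. The exact shape is up to you; the message names and fields below are hypothetical.</p>
<pre><code class="language-ts">// Hypothetical message union; one member per action the service worker supports
type FetchSomethingMessage = {
  type: &#x27;fetchSomething&#x27;;
  payload: {id: string};
};

type CreateBookmarkMessage = {
  type: &#x27;createBookmark&#x27;;
  payload: {url: string};
};

export type MessageType = FetchSomethingMessage | CreateBookmarkMessage;

// Narrowing on `type` gives us a fully typed payload in each branch
export function describeMessage(message: MessageType): string {
  switch (message.type) {
    case &#x27;fetchSomething&#x27;:
      return `fetch ${message.payload.id}`;
    case &#x27;createBookmark&#x27;:
      return `bookmark ${message.payload.url}`;
  }
}
</code></pre>
<p>The service worker can then <code>switch</code> on <code>message.type</code> and TypeScript narrows the payload for free.</p>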
<h2>CORS &amp; consistent extension key</h2>
<p>Now let’s talk about CORS. To allow CORS requests I had to make two tweaks:</p>
<ol>
<li>Explicitly state the remote-server URL in my manifest (under <code>host_permissions</code> in my <code>manifest.json</code>)</li>
<li>Whitelist the extension in my CORS config on my server</li>
</ol>
<p>Here’s the problem. Every time you load an unpacked extension, a new id is assigned to it. If you have beta users, or your users are side-loading the extension, you don’t have a single origin to whitelist. The solution is assigning a fixed extension id (<code>key</code> in <code>manifest.json</code>) to your extension.</p>
<p>Now, to ensure that the key is unique and no other extension shares it, we have to create a draft entry in the Chrome Web Store (and/or the Edge store). <strong>Even if you don’t plan to publish your extension</strong>, you have to do this.</p>
<p>After you&#x27;ve created a draft entry, add the key to your <code>manifest.json</code> and you&#x27;re good to go. Every time you remove and re-add the extension, it will keep the same id. This way we can safely whitelist a single <code>chrome-extension://</code> URI.</p>
<p><a href="https://developer.chrome.com/docs/extensions/mv3/manifest/key/">Here’s more about this approach</a></p>
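<p>For reference, here&#x27;s roughly what the manifest entry looks like. The <code>key</code> value below is a truncated placeholder; the real value is the public key from your draft store entry.</p>
<pre><code class="language-json">{
  &quot;name&quot;: &quot;My Extension&quot;,
  &quot;version&quot;: &quot;1.0&quot;,
  &quot;manifest_version&quot;: 3,
  &quot;key&quot;: &quot;MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8A...&quot;
}
</code></pre>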
<blockquote>The same applies to the Edge store. For my extension I produce two builds, <code>extensions/edge</code> &amp; <code>extensions/chrome</code>, each with a different extension id in its <code>manifest.json</code>. This way I whitelist only these two origins in my backend.</blockquote>
<h2>Cookies &amp; Authentication</h2>
<p>Now, here’s a question. How do you authenticate your user? We assume that we make a call to our server, but how do we ensure that the user is authenticated?</p>
<p>One approach is to prompt them to fill in their credentials. But that&#x27;s not great UX. In my case, I had to use the same authentication flow as the main application. If a user is authenticated in the application, I want to reuse their cookie. I don’t want to prompt them to log in again, just to use the extension.</p>
<p>Turns out there’s a way to do this. First, you need to add the <code>cookies</code> permission in the <code>manifest.json</code> file, then ensure that your application&#x27;s domain exists in the <code>host_permissions</code> array.</p>
<pre><code class="language-json">{
  &quot;name&quot;: &quot;My Extension&quot;,
  &quot;version&quot;: &quot;1.0&quot;,
  &quot;manifest_version&quot;: 3,
  &quot;description&quot;: &quot;My Extension&quot;,
  &quot;permissions&quot;: [&quot;cookies&quot;],
  &quot;host_permissions&quot;: [&quot;https://my-website.com&quot;]
}
</code></pre>
<p>Now if you include cookies in your <code>fetch</code>/<code>axios</code> requests, they will be sent to the server. And if the user is authenticated, you&#x27;ll get the same response as if you were using the application directly. This is a great way to reuse your authentication flow, without re-inventing the wheel. I was honestly surprised that this was possible.</p>
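<p>As a sketch, the request helper in my service worker looks roughly like this. The endpoint URL and function name are made up; the important part is <code>credentials: &#x27;include&#x27;</code>, which opts the cross-origin request in to sending cookies.</p>
<pre><code class="language-ts">// Hypothetical request helper; the URL is an example
async function fetchSomething(id: string) {
  const response = await fetch(`https://my-website.com/api/things/${id}`, {
    // Attach the user&#x27;s existing session cookie to the request
    credentials: &#x27;include&#x27;,
  });
  if (!response.ok) {
    throw new Error(`Request failed with ${response.status}`);
  }
  return response.json();
}
</code></pre>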
<h2>Posting messages from the service worker to the content scripts</h2>
<p>Alright, let’s do the reverse now. How do you send a message from the service worker to a content script? Well, you can’t use <code>chrome.runtime.sendMessage</code> for that.</p>
<p>Each tab has its own content script. So if you want to send a message to a specific tab, you need to know that tab&#x27;s id. And to get that piece of info, we need the <code>tabs</code> permission.</p>
<p>Check again our lovely example if you’re confused.</p>
<p><img src="https://res.cloudinary.com/ds9pd4ywd/image/upload/v1675781730/blog-images/posts/building-browser-extension/content-script-setup_terc19.png" alt="image"/></p>
<p>Alright, let’s add the <code>tabs</code> permission to our <code>manifest.json</code> file.</p>
<pre><code class="language-json">{
  &quot;name&quot;: &quot;My Extension&quot;,
  &quot;version&quot;: &quot;1.0&quot;,
  &quot;manifest_version&quot;: 3,
  &quot;description&quot;: &quot;My Extension&quot;,
  &quot;permissions&quot;: [&quot;cookies&quot;, &quot;tabs&quot;],
  &quot;host_permissions&quot;: [&quot;https://my-website.com&quot;]
}
</code></pre>
<p>Now, we can use the <code>chrome.tabs.query</code> API to query for any tabs that match our filters. And then we can use the <code>chrome.tabs.sendMessage</code> API to send a message to the content-scripts of those tabs.</p>
<p>Here’s a little helper function that will send a message to all the tabs that match a specific URL pattern. It’s a bit naive, but you get the idea.</p>
<pre><code class="language-ts">function notifyAllContentScripts(message: MessageType, urlPattern: string) {
  chrome.tabs.query({url: urlPattern}, function (tabs) {
    for (const tab of tabs) {
      if (tab.id) {
        // send the message, ignore the callback
        chrome.tabs.sendMessage(tab.id, message, () =&gt; undefined);
      }
    }
  });
}
</code></pre>
<p>And finally, the component has to attach a listener to accept the message. Note that <code>chrome.runtime.onMessage</code> &amp; <code>chrome.runtime.sendMessage</code> are the only APIs available to the content script.</p>
<pre><code class="language-ts">@customElement(&#x27;some-root-component&#x27;)
export class SomeRootComponent extends LitElement {
  // ...

  connectedCallback() {
    super.connectedCallback();

    chrome.runtime.onMessage.addListener((request: MessageType) =&gt; {
      if (request.type === &#x27;some-action&#x27;) {
        this._doSomething();
      }
    });
  }
}
</code></pre>
<h2>Listening for cookie changes</h2>
<p><img src="https://res.cloudinary.com/ds9pd4ywd/image/upload/v1675781719/blog-images/posts/building-browser-extension/cookie-flow_ktt2ou.png" alt="image"/></p>
<p>Now let’s combine some of the previous sections.</p>
<ol>
<li>We can reuse the cookies</li>
<li>We can send messages from the content script to the service worker</li>
<li>We can notify all content scripts from the service worker</li>
</ol>
<p>So, imagine that the user is initially logged out, and we prompt them to log in. How would we implement a feature where we retry fetching the data after a successful login in another tab?</p>
<p>Thankfully there’s the <a href="https://developer.chrome.com/docs/extensions/reference/cookies/#event-onChanged"><code>cookies.onChanged</code> event</a> that will notify us when a cookie changes.</p>
<blockquote>Unfortunately I&#x27;ve found that the event fires quite a lot (maybe it’s my login-flow code). So I&#x27;ve added a debounce function to only keep the last call.</blockquote>
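<p>There&#x27;s nothing extension-specific about the <code>debounce</code> helper; a minimal trailing-edge version (a sketch, not necessarily my exact code) is enough here.</p>
<pre><code class="language-ts">// Minimal trailing-edge debounce: only the last call within `waitMs` runs
function debounce(fn: (...args: unknown[]) =&gt; void, waitMs: number) {
  let timer: ReturnType&lt;typeof setTimeout&gt; | undefined;

  return (...args: unknown[]) =&gt; {
    if (timer !== undefined) {
      clearTimeout(timer);
    }
    timer = setTimeout(() =&gt; fn(...args), waitMs);
  };
}
</code></pre>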
<pre><code class="language-ts">function onCookieChange(changeInfo: chrome.cookies.CookieChangeInfo) {
  // removed === false means the cookie was added (weird API design)
  if (!changeInfo.removed) {
    if (changeInfo.cookie.domain.includes(appHostname)) {
      notifyContentScripts({type: &#x27;refreshArticle&#x27;}, myTargetUrlPattern);
    }
  }
}

// Debounce the function to avoid spamming the content scripts
chrome.cookies.onChanged.addListener(debounce(onCookieChange, 500));
</code></pre>
<p>Then we notify the appropriate content scripts, and they can do whatever they want. They can post back and ask the service worker to do something, or they can do some logic themselves.</p>
<h2>Custom fonts</h2>
<p>Ok, enough with the data flow. Let’s add some custom fonts.</p>
<p>I read various approaches online, but I found that creating the font link tag programmatically works just fine.</p>
<pre><code class="language-ts">const gf = document.createElement(&#x27;link&#x27;);
gf.href =
  &#x27;https://fonts.googleapis.com/css2?family=Roboto:wght@300;400;500;700&amp;display=swap&#x27;;
gf.rel = &#x27;stylesheet&#x27;;
document.body.appendChild(gf);
</code></pre>
<p>Maybe there’s a strong reason to use <code>web_accessible_resources</code> instead, so I’ll keep an eye on that. For now, if the user blocks Google Fonts, I’m happy with system fonts.</p>
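<p>If you do want the bundled route, the rough idea is to ship the font files with the extension and expose them to the page. This is an untested sketch on my side:</p>
<pre><code class="language-json">{
  &quot;web_accessible_resources&quot;: [
    {
      &quot;resources&quot;: [&quot;fonts/*.woff2&quot;],
      &quot;matches&quot;: [&quot;https://my-website.com/*&quot;]
    }
  ]
}
</code></pre>
<p>The injected styles would then reference the files via <code>chrome.runtime.getURL(&#x27;fonts/my-font.woff2&#x27;)</code>.</p>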
<h2>Reacting to outside events</h2>
<p>One requirement I had was to react to outside events. For example, if the user clicks on another DOM element, I want to close my extension. I used this very simple snippet.</p>
<pre><code class="language-ts">@customElement(&#x27;some-root-component&#x27;)
export class SomeRootComponent extends LitElement {
  connectedCallback() {
    super.connectedCallback();
    // ... more
    window.addEventListener(&#x27;click&#x27;, this._handleClickOutside);
  }

  disconnectedCallback() {
    window.removeEventListener(&#x27;click&#x27;, this._handleClickOutside);
    super.disconnectedCallback();
  }

  private _handleClickOutside = (event: Event) =&gt; {
    if (!event.composedPath().includes(this)) {
      this._handleSidebarVisibility(false);
    }
  };
}
</code></pre>
<p>I’m unsure if attaching a listener to <code>window</code> is the best approach, so I’m still on the lookout for a better solution.</p>
<h2>Development</h2>
<p>As for development and testing, I went with <code>Vite</code>, <code>Vitest</code> &amp; <code>Storybook</code>. <code>Storybook</code> is mandatory, because the moment you add browser-extension APIs, you break <code>Vite</code>&#x27;s development mode. <code>Storybook</code> supports <code>Lit</code>, so it just works nicely. <a href="https://storybook.js.org/docs/web-components/get-started/whats-a-story">Here’s the documentation on writing Lit elements in Storybook</a>.</p>
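<p>For reference, a story for a <code>Lit</code> component is just a function returning a template. A minimal sketch (the component name is hypothetical):</p>
<pre><code class="language-ts">import {html} from &#x27;lit&#x27;;

export default {
  title: &#x27;SomeRootComponent&#x27;,
};

// Each named export is a story rendering the custom element
export const Default = () =&gt;
  html`&lt;some-root-component&gt;&lt;/some-root-component&gt;`;
</code></pre>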
<p>As for my very light <code>Vite</code> config, there’s nothing special. Just picking the right entry points.</p>
<pre><code class="language-ts">/// &lt;reference types=&quot;vitest&quot; /&gt;
import {defineConfig} from &#x27;vite&#x27;;

// https://vitejs.dev/config/
export default defineConfig({
  test: {
    globals: true,
  },
  build: {
    lib: {
      entry: [&#x27;src/client/main.ts&#x27;, &#x27;src/server/service-worker.ts&#x27;],
      formats: [&#x27;es&#x27;],
      name: &#x27;MyExtension&#x27;,
    },
  },
});
</code></pre>
<p>And here’s the full content-script entry point. The whole logic lives in the <code>some-root-component.ts</code> file, the equivalent of <code>React</code>&#x27;s <code>App.ts</code>.</p>
<pre><code class="language-ts">// Import the polyfills :(
import &#x27;@webcomponents/custom-elements&#x27;;
// Import the web components
// import &#x27;./components/a.ts&#x27;;
// import &#x27;./components/b.ts&#x27;;
// import &#x27;./components/c.ts&#x27;; etc...
import &#x27;./components/some-root-component&#x27;;

// Add to the page
document.body.appendChild(document.createElement(&#x27;some-root-component&#x27;));

// Load the fonts
const gf = document.createElement(&#x27;link&#x27;);
gf.href =
  &#x27;https://fonts.googleapis.com/css2?family=Roboto:wght@300;400;500;700&amp;display=swap&#x27;;
gf.rel = &#x27;stylesheet&#x27;;
document.body.appendChild(gf);
</code></pre>
<h2>Bundling</h2>
<p>My final output is two folders, one <code>extensions/chrome</code> and one <code>extensions/edge</code>. I’ll omit posting my build script, but here’s the gist of it:</p>
<ol>
<li>Build the <code>Vite</code> project, keep it in a temp folder</li>
<li>Create two folders, one for Chrome and one for Edge</li>
<li>Copy the <code>Vite</code> output to both folders</li>
<li>Build the <code>manifest.json</code> file, merging my base config with the Chrome specific config</li>
<li>Move it to the Chrome folder</li>
<li>Build the <code>manifest.json</code> file, merging my base config with the Edge specific config</li>
<li>Move it to the Edge folder</li>
</ol>
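<p>Steps 4 and 6 above boil down to a shallow merge of two JSON objects. A sketch, with made-up names:</p>
<pre><code class="language-ts">type Manifest = Record&lt;string, unknown&gt;;

// Browser-specific fields (like `key`) win over the base config
function mergeManifest(base: Manifest, overrides: Manifest): Manifest {
  return {...base, ...overrides};
}

const baseManifest = {name: &#x27;My Extension&#x27;, manifest_version: 3};
const chromeManifest = mergeManifest(baseManifest, {key: &#x27;CHROME_KEY&#x27;});
</code></pre>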
<h2>Publishing</h2>
<p>I&#x27;ve used the <a href="https://chrome.google.com/webstore/developer/dashboard">Chrome Developer Dashboard</a> to publish the extension. It’s a simple process, but it will set you back $5. The whole review process took about 2 days, so no complaints there.</p>
<h2>Final thoughts</h2>
<p>Building a browser extension isn’t as intimidating as I thought. It has its quirks, and some unknowns due to the introduction of Manifest v3, but nothing that can’t be overcome.</p>
<p>I&#x27;m particularly proud of using Web Components and <code>Lit</code> on this project. They had been on my radar for so long, but I never had the chance to use them. One possible extension would be to use <a href="https://shoelace.style">Shoelace</a> for styling, but I’ll leave that for another day.</p>
<p>So there’s that. A simple browser extension, built and published in a few days. I hope you found it useful.</p>
<blockquote>Resources:<ul>
<li><a href="https://developer.chrome.com/docs/extensions/mv3/getstarted/">Chrome Extensions Documentation</a></li>
<li><a href="https://lit.dev/docs/getting-started/">Lit Documentation</a></li>
<li><a href="https://developer.chrome.com/docs/extensions/mv3/manifest/key/">Generating a constant extension Key</a></li>
<li><a href="https://css-tricks.com/how-to-transition-to-manifest-v3-for-chrome-extensions/">CSS Tricks - How to transition to Manifest v3</a></li>
</ul></blockquote>]]></content:encoded>
            <author>dnlytras@gmail.com (Dimitrios Lytras)</author>
        </item>
        <item>
            <title><![CDATA[Rewriting my blog with Next.js]]></title>
            <link>https://dnlytras.com/blog/rewriting-with-next</link>
            <guid isPermaLink="false">https://dnlytras.com/blog/rewriting-with-next</guid>
            <pubDate>Tue, 15 Nov 2022 00:00:00 GMT</pubDate>
            <description><![CDATA[How I moved this website from Gatsby to Next.js]]></description>
<content:encoded><![CDATA[<blockquote><p><strong>Probably not what you&#x27;re looking for</strong></p><p>This was written in <strong>2022</strong>, targeting <strong>Next.js 13</strong> &amp; the <strong>Pages router</strong>.
Since then I have made a few modifications and moved everything to RSC. If you&#x27;re looking for help, I can share with you my solutions, but they don&#x27;t warrant a full blog post. RSS specifically is a pain point.</p></blockquote>
<blockquote>TL;DR:<ul>
<li>Gatsby is excellent for static sites. I don’t like it for dynamic sites, and its workarounds are not ideal.</li>
<li>Next.js is excellent too, but recreating the same functionality (Markdown pipeline, RSS, Image optimizations) as Gatsby is a pain.</li>
<li>Next.js has the market share, and the backing of the React core team, so it should be future-proof.</li>
<li>The current stack is Next.js, Tailwind, Floating-UI, MDX, Cloudinary, and Netlify.</li>
<li>I&#x27;ve introduced some fun new features, like better snippets, static tweets, and more.</li>
<li>Overall, the rewrite was a success; I’m happy with the result, but it’s far from perfect.</li>
</ul></blockquote>
<h2>Moving on from Gatsby</h2>
<p>I&#x27;ve been using Gatsby since v0, and it’s <strong>perfect</strong> for my use case. That said, I had a few reasons for moving on.</p>
<ol>
<li>I mostly work with Next.js. My only Gatsby project is my blog and I don’t like the context-switching.</li>
<li>There’s nothing that interests me in the latest Gatsby releases. For example in Gatsby 5, the two big changes are Partial Hydration and the Slices API.</li>
<li>I just don’t like the GraphQL layer and its plugin ecosystem.</li>
</ol>
<p>I can’t stress enough the last point. I wanted to use MDX in my Gatsby blog, and I was debugging some GraphQL weirdness that pushed me to rewrite it all with Next.js.</p>
<h2>Rewriting with Next.js</h2>
<p>I wanted to pick a framework that supports SSR out of the box. I was torn between Remix &amp; Next.js but decided to go for the more &quot;mature&quot; Next.js. Had I seen the Shopify acquisition news earlier, I might have picked Remix.</p>
<p>I was also a bit hyped by the <a href="https://nextjs.org/blog/next-13">Next.js 13</a> release and wanted to try out the new features. My enthusiasm didn’t last long, as I realized it was a paper release, and the new features were still in beta or experimental. Anyhow, I decided to stick with it, use the <code>/pages</code> folder for now and see how it goes.</p>
<h2>Next.js on Netlify</h2>
<p>I have various projects and they all live on Netlify. Even if Vercel is a better fit for Next.js, I’m not going to change hosting providers. I’m also a bit stubborn, and I don’t like vendor lock-in. If Next.js starts to become problematic to deploy elsewhere, I’ll move to Remix.</p>
<p>So far the experience has been smooth. CSP headers were not applying correctly, but I fixed it by adding a <code>netlify.toml</code> file. I also cached my fonts and static assets, something Vercel supposedly does out of the box.</p>
<p>Here’s my config so far:</p>
<pre><code class="language-toml">[[headers]]
  for = &quot;/*&quot;
  [headers.values]
    X-Frame-Options = &quot;DENY&quot;
    X-XSS-Protection = &quot;1; mode=block&quot;
    Content-Security-Policy = &#x27;&#x27;&#x27;
    default-src &#x27;self&#x27;;
    script-src &#x27;self&#x27; &#x27;unsafe-eval&#x27; &#x27;unsafe-inline&#x27;;
    child-src codesandbox.io;
    style-src &#x27;self&#x27; &#x27;unsafe-inline&#x27;;
    img-src &#x27;self&#x27; blob: data: *.cloudinary.com pbs.twimg.com i.scdn.co;
    media-src &#x27;none&#x27;;
    connect-src *;
    font-src &#x27;self&#x27;;&#x27;&#x27;&#x27;
    Referrer-Policy = &quot;no-referrer&quot;
    X-Content-Type-Options = &quot;nosniff&quot;
    X-DNS-Prefetch-Control = &quot;off&quot;
    Strict-Transport-Security = &quot;max-age=31536000; includeSubDomains; preload&quot;
    Permissions-Policy = &quot;camera=(), microphone=(), geolocation=()&quot;

[[headers]]
  for = &quot;/fonts/*&quot;
    [headers.values]
      cache-control = &#x27;public, max-age=31536000, immutable&#x27;

[[headers]]
  for = &quot;/_next/static/*&quot;
    [headers.values]
      cache-control = &#x27;public, max-age=31536000, immutable&#x27;
</code></pre>
<h2>Refactoring the code</h2>
<p>This list is not exhaustive, but it’s a good summary of the changes I made. The snippets I’ll post are meant to provide some general idea, and might not show the whole implementation. I’ll try to keep it updated as I make more changes.</p>
<h3>1. Updating pages and components</h3>
<p>I decided to start a new project from scratch and add TypeScript. Thankfully, I had abstracted most of the Gatsby-related stuff, so it was painless moving everything to their new folders.</p>
<pre><code class="language-tsx">export default function GatsbyBlogPage() {
  const postsByYear = useGroupedPosts();

  return (
    &lt;Layout&gt;
      &lt;Container&gt;
        &lt;Cover
          title=&quot;Blog&quot;
          text=&quot;Writing about coding, life, productivity &amp; more&quot;
        /&gt;
        &lt;PostGrid postsByYear={postsByYear} /&gt;
      &lt;/Container&gt;
    &lt;/Layout&gt;
  );
}

export function Head() {
  return (
    &lt;MetaTags
      title=&quot;Blog&quot;
      path=&quot;/blog&quot;
      description=&quot;Writing about coding, life, productivity &amp; more&quot;
    /&gt;
  );
}
</code></pre>
<pre><code class="language-tsx">export default function NextBlogPage({
  postsByYear,
}: {
  postsByYear: GroupedPosts;
}) {
  return (
    &lt;&gt;
      &lt;MetaTags
        title=&quot;Blog&quot;
        path=&quot;/blog&quot;
        description=&quot;Writing about coding, life, productivity &amp; more&quot;
      /&gt;
      &lt;Layout&gt;
        &lt;Container&gt;
          &lt;Cover
            title=&quot;Blog&quot;
            text=&quot;Writing about coding, life, productivity &amp; more&quot;
          /&gt;
          &lt;PostGrid postsByYear={postsByYear} /&gt;
        &lt;/Container&gt;
      &lt;/Layout&gt;
    &lt;/&gt;
  );
}

export async function getStaticProps() {
  const postsByYear = getAllGroupedPosts();
  return {props: {postsByYear}};
}
</code></pre>
<h3>2. Updating navigation</h3>
<p>My first issue was replacing the very clever <code>&lt;GatsbyLink /&gt;</code>. Unfortunately, Next.js needs some help with highlighting the active navigation link. I also had to update the prefetch functionality, so that linked pages were fetched only on hover. Otherwise, it was a simple change.</p>
<pre><code class="language-js">import React from &#x27;react&#x27;;
import {Link as GatsbyLink} from &#x27;gatsby&#x27;;

export function Link({children, to, ...props}) {
  return (
    &lt;GatsbyLink to={to} {...props}&gt;
      {children}
    &lt;/GatsbyLink&gt;
  );
}
</code></pre>
<pre><code class="language-tsx">import NextLink, {LinkProps} from &#x27;next/link&#x27;;
import {useRouter} from &#x27;next/router&#x27;;

type ActiveLinkProps = LinkProps &amp; {
  activeClassName?: string;
  className?: string;
};

export function Link({
  href,
  children,
  className: targetClassName = &#x27;&#x27;,
  activeClassName,
  prefetch = false,
  ...props
}: React.PropsWithChildren&lt;ActiveLinkProps&gt;) {
  const {asPath} = useRouter();
  const className =
    targetClassName +
    (asPath === href &amp;&amp; activeClassName ? &#x27; &#x27; + activeClassName : &#x27;&#x27;);

  return (
    &lt;NextLink href={href} {...props} className={className} prefetch={prefetch}&gt;
      {children}
    &lt;/NextLink&gt;
  );
}
</code></pre>
<h3>3. Nit changes</h3>
<p>There were some smaller changes, like font loading, analytics configuration, and CSS loading. In short, whatever goes in <code>gatsby-browser.js</code> in Gatsby, goes in <code>pages/_app.js</code> in Next.js.</p>
<pre><code class="language-tsx">import &#x27;@/styles/global.css&#x27;;

import {Inter} from &#x27;@next/font/google&#x27;;
import type {AppProps} from &#x27;next/app&#x27;;

import {Analytics} from &#x27;@/components/Analytics&#x27;;

const interVariable = Inter({subsets: [&#x27;latin&#x27;], display: &#x27;swap&#x27;});

export default function App({Component, pageProps: {...pageProps}}: AppProps) {
  return (
    &lt;&gt;
      &lt;Analytics /&gt;
      &lt;div className={interVariable.className}&gt;
        &lt;Component {...pageProps} /&gt;
      &lt;/div&gt;
    &lt;/&gt;
  );
}
</code></pre>
<h3>4. Dropping GatsbyImage, and using Cloudinary</h3>
<p>Boy, oh boy. <code>&lt;GatsbyImage /&gt;</code> is awesome. I really didn’t have to think about images in my Gatsby blog at all. Vercel pretty much vendor locks <code>&lt;NextImage /&gt;</code>, and I deploy on Netlify, so I decided to use Cloudinary and call it a day. I know Netlify does its best with its own Next Runtime, but I wanted to trust a dedicated service for optimizing my images. Here’s what I’m using:</p>
<pre><code class="language-tsx">import Image from &#x27;next/image&#x27;;

import {CLOUDINARY_URL} from &#x27;@/links&#x27;;

type CloudinaryImageProps = {
  src: string;
  alt: string;
  width: number;
  height: number;
  title?: string;
  className?: string;
  blurDataURL?: string;
  quality?: number;
};

type CloudinaryLoaderProps = Pick&lt;
  CloudinaryImageProps,
  &#x27;src&#x27; | &#x27;width&#x27; | &#x27;quality&#x27;
&gt;;

function cloudinaryLoader({src, width, quality = 90}: CloudinaryLoaderProps) {
  return `${CLOUDINARY_URL}/w_${width},q_${quality},f_webp${src}`;
}

export function CloudinaryImage({
  src,
  alt,
  width,
  height,
  title,
  className,
  blurDataURL,
  quality,
}: CloudinaryImageProps) {
  return (
    &lt;Image
      quality={quality}
      src={src}
      loader={cloudinaryLoader}
      alt={alt}
      title={title}
      width={width}
      height={height}
      className={className}
      {...(blurDataURL ? {placeholder: &#x27;blur&#x27;, blurDataURL: blurDataURL} : {})}
    /&gt;
  );
}
</code></pre>
<p>As for the <code>blurDataURL</code>, I created a function to generate them on demand</p>
<pre><code class="language-ts">export async function getBase64Image(imageId: string): Promise&lt;string&gt; {
  const response = await fetch(
    `${CLOUDINARY_URL}/w_100/e_blur:1000,q_auto,f_webp${imageId}`
  );
  const buffer = await response.arrayBuffer();
  const data = Buffer.from(buffer).toString(&#x27;base64&#x27;);
  return `data:image/webp;base64,${data}`;
}
</code></pre>
<p>..and include them in the MDX files.</p>
<pre><code class="language-mdx">---
title: &#x27;Migrating from Gatsby to Next.js&#x27;
date: &#x27;2022-11-10&#x27;
description: &#x27;Documenting website rewrite with Next.js&#x27;
category: &#x27;React&#x27;
featuredImage: &#x27;/v1667896248/blog-images/covers/gatsby-to-next.jpg&#x27;
blurHash: &#x27;data:image/webp;base64,UklGRi4BAABXRUJQVlA4ICIBAACQCgCdASpkAEMAPrFEmEopKqIhtBuacVAWCWctgApEhfgL6cVs+EP+L7eCBMs6WxuBdNhnrhgZBc3o3T07pyMmL/33nE0QwO5CRL7Rcmu4dPcZyTC4MBUkUest3AAA/vCwN7ITIh1amqRK9uZ9e8L+JX4epgvVOoHWyh61Nr2+j3iJ1n5+tTV1e8qSP1QjANudOvbZEuBwQa9ipePIbKgS4Q5u5hroxOh4PorEJQgLU0HpsMc8BorpBdCX9VTbR4lYvZ9cZ34FW5j9pTYfkDICNcGYDODTMFeWGroq5PG5p6X6BMdLTOLEgfKWZYzwyjObVdLf7A0FeBoPihEMEILwKlUghr/eFH8vKHeKgdhCwhCLhB1qGhoCBzA7e8c4HyNIAA==&#x27;
---

I have [written a post](/blog/state-of-blog) about my blog&#x27;s stack and why I don’t plan on rewriting it with a different framework. Turns out, this was a lie, and I rewrote it.
</code></pre>
<p>Finally, I can use the <code>&lt;CloudinaryImage /&gt;</code> component anywhere in my app, including MDX files.</p>
<pre><code class="language-tsx">export function Post({post}: {post: PostType}) {
  return (
    &lt;Link
      className=&quot;overflow-hidden h-full z-0 grid sm:grid-cols-1 ring-gray-100 ring-offset-8 hover:ring-4 rounded-lg group&quot;
      href={`/blog/${post.slug}`}
      key={post.slug}
      aria-label={`Read the article &#x27;${post.frontmatter.title}&#x27;`}
    &gt;
      &lt;div&gt;
        &lt;div className=&quot;shrink-0 relative rounded-lg overflow-hidden&quot;&gt;
          &lt;CloudinaryImage
            alt={`Cover image of &#x27;${post.frontmatter.title}&#x27;`}
            src={post.frontmatter.featuredImage}
            width={480}
            height={320}
            className=&quot;max-h-full w-full rounded-lg overflow-hidden brightness-75 object-cover&quot;
            blurDataURL={post.frontmatter.blurHash}
          /&gt;
        &lt;/div&gt;
        {/* ... */}
    &lt;/Link&gt;
  );
}
</code></pre>
<p>Could I do better? Probably. For now, it works fine and I can focus on other things. Most importantly I can focus on writing content, and if I feel inspired, build something like <a href="https://kentcdodds.com/blog/building-an-awesome-image-loading-experience#sizes-srcset-and-cloudinary">Kent C. Dodds&#x27; getImageBuilder</a>.</p>
<h3>5. Replacing Gatsby&#x27;s Markdown plugins, and introducing MDX</h3>
<p>Here’s the juicy part. I wanted to create some custom components in my markdown files. Essentially I had to replicate the functionality of these plugins:</p>
<pre><code class="language-js">// This is not my full config, just the relevant parts
module.exports = {
  plugins: [
    `gatsby-transformer-json`,
    `gatsby-plugin-sharp`,
    `gatsby-transformer-sharp`,
    `gatsby-plugin-image`,
    `gatsby-plugin-twitter`,
    {
      resolve: `gatsby-source-filesystem`,
      options: {
        path: `${__dirname}/src/content/`,
        name: &#x27;data&#x27;,
      },
    },
    {
      resolve: `gatsby-transformer-remark`,
      options: {
        plugins: [
          {
            resolve: `gatsby-remark-autolink-headers`,
            options: {
              offsetY: `100`,
              icon: `&lt;svg aria-hidden=&quot;true&quot; height=&quot;20&quot; version=&quot;1.1&quot; viewBox=&quot;0 0 16 16&quot; width=&quot;20&quot;&gt;&lt;path fill=&#x27;#2A506F&#x27; fill-rule=&quot;evenodd&quot; d=&quot;M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83.42-1.64 1-2.09V6.25c-1.09.53-2 1.84-2 3.25C6 11.31 7.55 13 9 13h4c1.45 0 3-1.69 3-3.5S14.5 6 13 6z&quot;&gt;&lt;/path&gt;&lt;/svg&gt;`,
              maintainCase: true,
            },
          },
          {
            resolve: `gatsby-remark-vscode`,
            options: {
              theme: `GitHub Dark`,
              extensions: [
                path.resolve(__dirname, &#x27;./github-vscode-theme.zip&#x27;),
              ],
            },
          },
          `gatsby-remark-copy-linked-files`,
          `gatsby-remark-external-links`,
          `gatsby-remark-responsive-iframe`,
          {
            resolve: `gatsby-remark-images`,
            options: {
              maxWidth: 900,
              quality: 100,
              withWebp: true,
            },
          },
        ],
      },
    },
    {
      resolve: `gatsby-plugin-feed`,
      options: {},
    },
  ],
};
</code></pre>
<p>Alright, first I created a <code>/lib/posts.ts</code> file with all the relevant functions to get the posts. I’ll spare you most of the implementation details. Essentially I’m reading from the file system and parsing the frontmatter and markdown body.</p>
<pre><code class="language-ts">import fs from &#x27;fs&#x27;;
import matter from &#x27;gray-matter&#x27;;
import {join} from &#x27;path&#x27;;

import {PaginatedPreviewPost, Post, PostWithMdx} from &#x27;@/types/types&#x27;;

import {mdxToHtml} from &#x27;./mdxToHtml&#x27;;

const postsDirectory = join(process.cwd(), &#x27;content/blog&#x27;);

export function getPostMetadataBySlug(slug: string): Post {}

export async function getFullPostBySlug(slug: string): Promise&lt;PostWithMdx&gt; {
  const fullPath = join(postsDirectory, slug, &#x27;index.md&#x27;);
  const fileContents = fs.readFileSync(fullPath, &#x27;utf8&#x27;);
  const {data: frontmatter, content} = matter(fileContents);

  const mdxContent = await mdxToHtml(content);

  return {slug, frontmatter, content: mdxContent};
}

export function getAllPosts(): Array&lt;Post&gt; {}

export function getLatestPosts(numberOfPosts = 4): Array&lt;Post&gt; {}

export function getAllGroupedPosts(): Record&lt;number, Array&lt;Post&gt;&gt; {}

export function getPrevAndNextPosts(slug: string) {}
</code></pre>
<p>As for the MDX transformations, I’m using <code>next-mdx-remote</code> with a bunch of <code>rehype</code> plugins. Pretty much what Gatsby does under the hood.</p>
<pre><code class="language-ts">import {serialize} from &#x27;next-mdx-remote/serialize&#x27;;
import readingTime from &#x27;reading-time&#x27;;
import {rehypeAccessibleEmojis} from &#x27;rehype-accessible-emojis&#x27;;
import rehypeAutolinkHeadings from &#x27;rehype-autolink-headings&#x27;;
import rehypePrettyCode from &#x27;rehype-pretty-code&#x27;;
import rehypeSlug from &#x27;rehype-slug&#x27;;
import remarkGfm from &#x27;remark-gfm&#x27;;

import {MdxContent} from &#x27;@/types/types&#x27;;

import {shikiOptions} from &#x27;./shiki&#x27;;

export async function mdxToHtml(source: string): Promise&lt;MdxContent&gt; {
  const mdxSource = await serialize(source, {
    mdxOptions: {
      format: &#x27;mdx&#x27;,
      remarkPlugins: [remarkGfm],
      rehypePlugins: [
        rehypeSlug,
        [rehypeAutolinkHeadings, {properties: {className: [&#x27;anchor&#x27;]}}],
        [rehypePrettyCode, shikiOptions],
        rehypeAccessibleEmojis,
      ],
    },
  });

  return {
    html: mdxSource,
    wordCount: source.split(/\s+/gu).length,
    readingTime: readingTime(source).text,
  };
}
</code></pre>
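<p><code>readingTime</code> comes from the third-party <code>reading-time</code> package, but the underlying math is just a word count over an assumed reading speed. A dependency-free sketch (the 200 wpm constant is my assumption, not necessarily what the library uses):</p>
<pre><code class="language-ts">// Rough reading-time estimate: count whitespace-separated words, divide by wpm.
export function estimateReadingTime(source: string, wordsPerMinute = 200) {
  const wordCount = source.split(/\s+/gu).filter(Boolean).length;
  const minutes = Math.max(1, Math.round(wordCount / wordsPerMinute));
  return {wordCount, text: `${minutes} min read`};
}
</code></pre>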
<blockquote>I’m using <a href="https://github.com/shikijs/shiki">Shiki</a> for the syntax highlighting. I tried both Prism.js &amp; Highlight.js, and neither comes close to it. I can use any VSCode theme, just like I was used to with <code>gatsby-remark-vscode</code>. Currently, I’m using the &quot;Tokyo Night Storm&quot; theme.</blockquote>
<blockquote>That said, I have <a href="https://github.com/ds300/patch-package">heavily patched</a> the <code>rehype-pretty-code</code> plugin. Since I’m not doing client-side highlighting, my bundle was a bit bloated, so I had to fix a few things.</blockquote>
<p>Finally, consuming the data on the pages.</p>
<pre><code class="language-tsx">import {PostCover} from &#x27;@/components/blog/PostCover&#x27;;
import {PostFooter} from &#x27;@/components/blog/PostFooter&#x27;;
import {Container} from &#x27;@/components/Container&#x27;;
import {Layout} from &#x27;@/components/Layout&#x27;;
import {MDXRenderer} from &#x27;@/components/mdx/Mdx&#x27;;
import {MetaTags} from &#x27;@/components/MetaTags&#x27;;
import {getAllPosts, getFullPostBySlug, getPrevAndNextPosts} from &#x27;@/lib/posts&#x27;;
import type {PaginatedPreviewPost, PostWithMdx} from &#x27;@/types/types&#x27;;

export default function BlogPage({
  post,
  prevPost,
  nextPost,
}: {
  post: PostWithMdx;
  prevPost: PaginatedPreviewPost;
  nextPost: PaginatedPreviewPost;
}) {
  return (
    &lt;&gt;
      &lt;MetaTags
        title={post.frontmatter.title}
        path={post.slug}
        description={post.frontmatter.description}
      /&gt;
      &lt;Layout&gt;
        &lt;Container&gt;
          &lt;PostCover post={post} /&gt;
        &lt;/Container&gt;
        &lt;article className=&quot;prose max-w-3xl mx-auto px-6 pt-6&quot;&gt;
          &lt;MDXRenderer {...post.content.html} /&gt;
        &lt;/article&gt;
        &lt;div className=&quot;max-w-3xl mx-auto px-6&quot;&gt;
          &lt;hr className=&quot;my-12 text-zinc-100&quot; /&gt;
          &lt;PostFooter prevPost={prevPost} nextPost={nextPost} /&gt;
        &lt;/div&gt;
      &lt;/Layout&gt;
    &lt;/&gt;
  );
}

export async function getStaticPaths() {
  const paths = getAllPosts();
  return {
    paths: paths.map(({slug}) =&gt; ({params: {slug}})),
    fallback: &#x27;blocking&#x27;,
  };
}

export async function getStaticProps({params}: {params: {slug: string}}) {
  const post = await getFullPostBySlug(params.slug);

  if (!post) {
    return {notFound: true};
  }

  const {prevPost, nextPost} = getPrevAndNextPosts(post.slug);

  return {
    props: {
      post: {
        ...post,
        frontmatter: {
          ...post.frontmatter,
          featuredImage: &#x27;&#x27;,
          blurHash: &#x27;&#x27;,
        },
      },
      prevPost,
      nextPost,
    },
  };
}
</code></pre>
<h3>6. Building custom components</h3>
<p>The process is straightforward. Write a React component...</p>
<pre><code class="language-tsx">export function InfoBox({children}: {children: React.ReactNode}) {
  return (
    &lt;blockquote className=&quot;text-md relative border border-indigo-300 bg-indigo-50&quot;&gt;
      &lt;InformationCircleIcon className=&quot;absolute -top-3 -left-3 h-8 w-8 rounded-full bg-indigo-300 text-white&quot; /&gt;
      {children}
    &lt;/blockquote&gt;
  );
}
</code></pre>
<p>...and add it to the MDXRenderer&#x27;s list of custom components.</p>
<pre><code class="language-tsx">import type {MDXRemoteSerializeResult} from &#x27;next-mdx-remote&#x27;;
import {MDXRemote} from &#x27;next-mdx-remote&#x27;;

import {InfoBox} from &#x27;./BlockQuotes&#x27;;

export const components = {InfoBox};

export function MDXRenderer(props: MDXRemoteSerializeResult) {
  return &lt;MDXRemote {...props} components={components} /&gt;;
}
</code></pre>
<p>Then in my MDX file, I can call it as a normal React component. This makes it very easy to build fancy stuff like Charts, Code Blocks, DataTables, or some silly stuff I’m planning next.</p>
<pre><code class="language-mdx">&lt;InfoBox&gt;
  Lorem ipsum dolor sit amet, consectetur adipiscing elit. Proin ultrices
  faucibus massa, vitae placerat leo lacinia non. Integer malesuada velit in
  magna luctus sodales sed eu justo.
&lt;/InfoBox&gt;
</code></pre>
<blockquote>Lorem ipsum dolor sit amet, consectetur adipiscing elit. Proin ultrices faucibus massa, vitae placerat leo lacinia non. Integer malesuada velit in magna luctus sodales sed eu justo.</blockquote>
<h3>7. RSS</h3>
<p>My RSS implementation makes me sad. At this point, I contemplated whether writing my blog in Next.js makes any sense. I’m using <a href="https://j471n.in/blogs/rss-feed-for-nextjs">this approach</a>. This will be improved in the future.</p>
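<p>For context, the core of that approach is plain string-building: serialize the post metadata into RSS 2.0 items and write the result to a static file at build time. A trimmed-down sketch, where the <code>FeedItem</code> shape and the minimal escaping are my own simplifications:</p>
<pre><code class="language-ts">type FeedItem = {title: string; link: string; pubDate: string; description: string};

const escapeXml = (s: string) =&gt;
  s.replace(/&amp;/g, &#x27;&amp;amp;&#x27;).replace(/&lt;/g, &#x27;&amp;lt;&#x27;).replace(/&gt;/g, &#x27;&amp;gt;&#x27;);

// Serialize items into a minimal RSS 2.0 document.
export function buildRssFeed(siteUrl: string, items: Array&lt;FeedItem&gt;): string {
  const itemsXml = items
    .map(
      (item) =&gt;
        `&lt;item&gt;&lt;title&gt;${escapeXml(item.title)}&lt;/title&gt;&lt;link&gt;${item.link}&lt;/link&gt;&lt;pubDate&gt;${item.pubDate}&lt;/pubDate&gt;&lt;description&gt;${escapeXml(item.description)}&lt;/description&gt;&lt;/item&gt;`
    )
    .join(&#x27;\n&#x27;);
  return `&lt;?xml version=&quot;1.0&quot; encoding=&quot;utf-8&quot;?&gt;\n&lt;rss version=&quot;2.0&quot;&gt;&lt;channel&gt;&lt;link&gt;${siteUrl}&lt;/link&gt;\n${itemsXml}\n&lt;/channel&gt;&lt;/rss&gt;`;
}
</code></pre>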
<h2>Final thoughts</h2>
<p>Was it worth it?</p>
<p>My previous stack wasn’t broken, so it didn’t need fixing. There wasn’t any wow moment; I just had to scratch that itch. I’m happy with the result, even though it feels hacky in various places, and I’m sure I’ll improve it moving forward.</p>
<p>If anything, I’m particularly happy with the bundle size decrease, as well as my fancy new code blocks.</p>
<p>I shouldn’t be hard on Next.js, as I’m building a blog and not a complex app. If anything, I make things harder for myself by not using a CMS.</p>
<blockquote>Resources:<ul>
<li><a href="https://nextjs.org/blog/next-13">Next.js 13</a></li>
<li><a href="https://www.joshwcomeau.com/blog/how-i-built-my-blog/">How I Built My Blog (joshwcomeau.com)</a></li>
<li><a href="https://jeffjadulco.com/blog/migration-next">How I Migrated from Gatsby to Next.js (jeffjadulco.com)</a></li>
<li><a href="https://maxrozen.com/walkthrough-migrating-maxrozen-com-gatsby-to-nextjs">A Walkthrough of migrating MaxRozen.com from Gatsby to Next.js</a></li>
<li><a href="https://blog.eyas.sh/2021/08/gatsby-to-next-js/">Migrating this Blog to Next.js from Gatsby (eyas.sh)</a></li>
</ul></blockquote>]]></content:encoded>
            <author>dnlytras@gmail.com (Dimitrios Lytras)</author>
        </item>
        <item>
            <title><![CDATA[Using RxJS with React]]></title>
            <link>https://dnlytras.com/blog/rxjs-react</link>
            <guid isPermaLink="false">https://dnlytras.com/blog/rxjs-react</guid>
            <pubDate>Sun, 09 Oct 2022 00:00:00 GMT</pubDate>
            <description><![CDATA[Co-locating business logic in React with RxJS]]></description>
            <content:encoded><![CDATA[<p>One of my biggest pet peeves with React is how easy it makes it for me to mix business logic with the UI. It feels innocent at first. &quot;Just a <code>useState</code> will do for now&quot;.</p>
<p>Well, that&#x27;s never the case. Chances are this value will also be required or set by some other component. So what do we do? We move <code>useState</code> higher in the component hierarchy and pass it around using composition/context.</p>
<p>Then, we might want to fetch some data. React Query might do the job, but we start fragmenting the code. Who initializes the request? Where are we invalidating our queries? What happens if we need dependent queries? Where do all these live, in a messy <code>useEffect</code> hook?</p>
<p>Even the most coherent implementation will have some hacks to comply with React&#x27;s lifecycle. After having been bitten by this for quite some time, I prefer to write my business logic outside React.</p>
<p>You can use Redux or Zustand. Either will do. I prefer to use RxJS! You can still get the same sense of organization, with more fine-grained control over the data flow.</p>
<blockquote>I know that RxJS isn’t very loved by a part of the React community. I understand this. If you take anything away from this post, let it be this: there’s no harm in using Redux or Zustand. If you’re stuck with a SPA, and the URL state is limiting you, by all means, use them. Don’t reinvent the wheel. <a href="https://www.developerway.com/posts/how-to-write-performant-react-apps-with-context">Here’s a great article</a> about how tedious it is to make the <code>useReducer/useContext</code> combo performant.</blockquote>
<h2>Store set-up</h2>
<p>The core idea is that all business logic for our feature page (or complex component) lives inside a closure, outside React.
This is where all the nitty-gritty of our application logic happens.</p>
<p>Our React app will receive only the bare minimum it needs to render our components. It will not know any transformations. This allows us to limit the complexity in a single file, easily testable without having to bother with the React lifecycle.</p>
<blockquote>We should still write our integration tests to discover any regressions in our React components. We just have the added benefit that we can test our business logic in isolation.</blockquote>
<p>Here&#x27;s a simple case. We don’t have to guess who will use <code>orderBy</code> next.</p>
<blockquote>I prefer to export regular Observables instead of BehaviorSubjects. BehaviorSubjects can be updated by using <code>.next(myValue)</code> from the outside world.</blockquote>
<p>Alright, code...</p>
<pre><code class="language-ts">type OrderBy = &#x27;NAME_DESC&#x27; | &#x27;NAME_ASC&#x27;;

export function createExampleStore() {
  const setOrderByEvent$ = new Subject&lt;OrderBy&gt;();
  const orderBy$ = setOrderByEvent$.pipe(
    map((orderBy) =&gt; orderBy),
    startWith(&#x27;NAME_ASC&#x27;),
    shareReplay(1)
  );

  return {
    state: {
      orderBy$: withInitialValue(orderBy$, &#x27;NAME_ASC&#x27;),
    },
    actions: {
      setOrderBy: (orderBy: OrderBy) =&gt; setOrderByEvent$.next(orderBy),
    },
  };
}
</code></pre>
<p>I can hear you saying, &quot;But isn’t this very ceremonial? useState is simpler&quot;.</p>
<pre><code class="language-tsx">const [orderBy, setOrderBy] = useState&lt;OrderBy&gt;(&#x27;NAME_DESC&#x27;);
</code></pre>
<p>And I agree, it’s simpler. But what happens when:</p>
<ul>
<li>We&#x27;re asked to reset some other filter every time <code>orderBy</code> changes. It’s irrelevant to our component.</li>
<li>And then we need to reuse controls like these in a modal/sidebar? We probably need to change our component hierarchy.</li>
</ul>
<p>Let’s see one case where a value depends on another. Look how elegantly we update the <code>pageIndex</code>. We either set it explicitly, or it resets when the page size updates. No <code>useEffect</code> or mixing side-effects in callbacks like <code>handlePageIndexChange</code>.</p>
<p>No matter who uses these, we ensure that we have separated our design from our application state.</p>
<pre><code class="language-ts">export function createExampleStore() {
  const setPageLimitEvent$ = new Subject&lt;number&gt;();
  const pageLimit$ = setPageLimitEvent$.pipe(
    map((pageLimit) =&gt; pageLimit),
    startWith(50),
    shareReplay(1)
  );

  const setPageIndexEvent$ = new Subject&lt;number&gt;();
  const pageIndex$ = merge(
    setPageIndexEvent$.pipe(
      map((pageIndex) =&gt; pageIndex),
      startWith(0),
      shareReplay(1)
    ),
    setPageLimitEvent$.pipe(mapTo(0)) // Reset the current page, on page-size change
  );

  return {
    state: {
      pageIndex: withInitialValue(pageIndex$, 0),
      pageLimit: withInitialValue(pageLimit$, 50),
    },
    actions: {
      setPageIndex: (pageIndex: number) =&gt; setPageIndexEvent$.next(pageIndex),
      setPageLimit: (pageLimit: number) =&gt; setPageLimitEvent$.next(pageLimit),
    },
  };
}
</code></pre>
<p>We can write anything. It’s a simple TypeScript file. Let’s see a more expressive one.</p>
<p>Here we use discriminated unions to avoid impossible states. Take a look at <code>isExportAvailable$</code>: we only export a single boolean Observable. Our component doesn’t care about the number of selected rows. Just the bare minimum, keeping our UI flexible.</p>
<pre><code class="language-ts">type RowSelection =
  | {type: &#x27;ADD&#x27;; id: number}
  | {type: &#x27;REMOVE&#x27;; id: number}
  | {type: &#x27;CLEAR_ALL&#x27;};

export function createExampleStore() {
  const setSelectedRowEvent$ = new Subject&lt;RowSelection&gt;();

  const selectedRows$ = setSelectedRowEvent$.pipe(
    scan((rows: Array&lt;number&gt;, event) =&gt; {
      switch (event.type) {
        case &#x27;ADD&#x27;:
          return [...rows, event.id];
        case &#x27;REMOVE&#x27;: {
          return rows.filter((rowId) =&gt; rowId !== event.id);
        }
        case &#x27;CLEAR_ALL&#x27;: {
          return [];
        }
        default: {
          return rows;
        }
      }
    }, []),
    startWith([]),
    shareReplay(1)
  );

  const exportSelectedRowsEvent$ = new Subject&lt;void&gt;();
  const isExportAvailable$ = selectedRows$.pipe(
    map((rows) =&gt; rows.length &gt; 0),
    shareReplay(1)
  );
  const exportRequest$ = combineLatest([
    exportSelectedRowsEvent$,
    isExportAvailable$,
  ]).pipe(
    filter(([, isExportAvailable]) =&gt; isExportAvailable),
    map(([selectedRows]) =&gt; ({payload: {selectedIds: selectedRows}}))
    // do a network request
  );

  return {
    data: {
      exportRequest$: withInitialValue(exportRequest$, {status: &#x27;IDLE&#x27;}),
    },
    state: {
      selectedRows: withInitialValue(selectedRows$, [] as Array&lt;number&gt;),
      isExportAvailable: withInitialValue(isExportAvailable$, false),
    },
    actions: {
      selectRow: (id: number) =&gt; setSelectedRowEvent$.next({type: &#x27;ADD&#x27;, id}),
      unselectRow: (id: number) =&gt;
        setSelectedRowEvent$.next({type: &#x27;REMOVE&#x27;, id}),
      clearAllRows: () =&gt; setSelectedRowEvent$.next({type: &#x27;CLEAR_ALL&#x27;}),
      //
      exportSelectedRows: () =&gt; exportSelectedRowsEvent$.next(),
    },
  };
}
</code></pre>
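<p>A nice side effect of this shape: the <code>scan</code> callback is an ordinary reducer, so you can extract it and unit-test the selection logic with no RxJS or React involved. Roughly:</p>
<pre><code class="language-ts">type RowSelection =
  | {type: &#x27;ADD&#x27;; id: number}
  | {type: &#x27;REMOVE&#x27;; id: number}
  | {type: &#x27;CLEAR_ALL&#x27;};

// The same logic that scan() folds over the event stream, as a standalone function.
export function selectedRowsReducer(
  rows: Array&lt;number&gt;,
  event: RowSelection
): Array&lt;number&gt; {
  switch (event.type) {
    case &#x27;ADD&#x27;:
      return [...rows, event.id];
    case &#x27;REMOVE&#x27;:
      return rows.filter((rowId) =&gt; rowId !== event.id);
    case &#x27;CLEAR_ALL&#x27;:
      return [];
  }
}
</code></pre>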
<p>And finally, we can handle network requests with confidence:</p>
<ol>
<li>Not bothering with <code>useEffect</code></li>
<li>Not looking for which <code>useQuery</code> initialized that request</li>
<li>Canceling, throttling, debouncing, etc, with ease</li>
</ol>
<h2>Connecting to the React world</h2>
<p>Again, we have a simple TypeScript file. Nothing React-specific yet. Let’s wire it with our view.
We will use Context to pass it around.</p>
<blockquote>React context is a form of <code>Dependency Injection</code>. It’s not a state-management solution. Its combination with <code>useState/useReducer</code> makes it one. There’s no contradiction here.</blockquote>
<p>Here’s how we&#x27;re going to implement it.</p>
<ol>
<li>We&#x27;re rendering a route with React-Router</li>
<li>This entry component will create our store, and re-use its instance until it’s unmounted</li>
<li>It will wrap its children with a context provider, passing down the store</li>
<li>Every consumer can fetch that store, and subscribe to the stream it needs.</li>
</ol>
<blockquote>We won’t have extra rerenders. If any observable other than the one we’re subscribing to emits, nothing will happen. We’re good.</blockquote>
<p>Let’s write our one-off boilerplate: creating a context, typecasting it, and writing our hook for the consumer. The same thing you would do for a <code>useReducer</code>.</p>
<pre><code class="language-ts">import {createContext, useContext, useRef} from &#x27;react&#x27;;

export type ExampleStore = ReturnType&lt;typeof createExampleStore&gt;;

export const ExampleStoreContext = createContext({} as ExampleStore);

export function useExampleStore() {
  return useContext(ExampleStoreContext);
}
</code></pre>
<p>And finally, some React silliness to ensure that we won’t override our store. Note that we can pass props to our store initialization. It could be the <code>id</code> of the project we’re working on since we normally get that info from the URL.</p>
<pre><code class="language-ts">import {createContext, useContext, useRef} from &#x27;react&#x27;;

// This imports any of the previous snippets, where a store is created
import {createExampleStore} from &#x27;./ExampleStore&#x27;;

export type ExampleStore = ReturnType&lt;typeof createExampleStore&gt;;

export const ExampleStoreContext = createContext({} as ExampleStore);

export function useExampleStore() {
  return useContext(ExampleStoreContext);
}

export function useCreateExampleStore() {
  const storeRef = useRef&lt;ExampleStore&gt;();
  if (storeRef.current === undefined) {
    // can pass optional props
    storeRef.current = createExampleStore();
  }
  const store = storeRef.current;
  return store;
}
</code></pre>
<p>And here’s what our page component would look like. A single store that can react to anything.</p>
<pre><code class="language-tsx">export function ExamplePage() {
  const store = useCreateExampleStore();

  return (
    &lt;ExampleStoreContext.Provider value={store}&gt;
      &lt;Layout&gt;
        &lt;OrderBySelector /&gt;
        &lt;PageIndexSelector /&gt;
        &lt;PageLimitSelector /&gt;
        &lt;ExportTrigger /&gt;
      &lt;/Layout&gt;
      &lt;MyModal /&gt;
      &lt;MySidebar /&gt;
      &lt;LiterallyAnything /&gt;
    &lt;/ExampleStoreContext.Provider&gt;
  );
}
</code></pre>
<p>Now let’s see what a component would look like.
It’s no different from the Redux equivalent.</p>
<pre><code class="language-tsx">export function OrderBySelector() {
  const {state, actions} = useExampleStore();
  const orderBy = useObservable(state.orderBy$);

  return (
    &lt;Select value={orderBy} onChange={actions.setOrderBy}&gt;
      &lt;Select.Option value=&quot;NAME_ASC&quot;&gt;Name ASC&lt;/Select.Option&gt;
      &lt;Select.Option value=&quot;NAME_DESC&quot;&gt;Name DESC&lt;/Select.Option&gt;
    &lt;/Select&gt;
  );
}
</code></pre>
<blockquote>You can also build some Facades. Something like <code>useSortBy</code> and export <code>orderBy</code> and its setter, so that the component won’t have full access to the store. I don’t like to optimize that much early on, but it’s a valid call.</blockquote>
<p>And let’s take a closer look at the <code>useObservable</code> hook, the glue that holds everything together. Just plain React: subscribe to the observable, grab the value, store it as state, and move on.</p>
<pre><code class="language-ts">import {Observable} from &#x27;rxjs&#x27;;
import {useState, useEffect} from &#x27;react&#x27;;

type ObservableWrapper&lt;T&gt; = {
  observable: Observable&lt;T&gt;;
  initialValue: T;
};

export function withInitialValue&lt;T&gt;(
  observable: Observable&lt;T&gt;,
  initialValue: T
) {
  return Object.freeze({observable, initialValue});
}

export function useObservable&lt;T&gt;({
  observable,
  initialValue,
}: ObservableWrapper&lt;T&gt;) {
  const [state, setState] = useState&lt;T&gt;(() =&gt; initialValue);

  useEffect(() =&gt; {
    const sub = observable.subscribe(setState);
    return () =&gt; sub.unsubscribe();
  }, [observable]);

  return state;
}
</code></pre>
<p>There are various 3rd-party implementations like the one from <a href="https://observable-hooks.js.org/api/#useobservable">Observable Hooks</a>. That said, I prefer to &quot;hide&quot; the initial value that React needs for that first render. You&#x27;re free to use a simpler implementation and provide the initial value directly from the component.</p>
<h2>Without React-context</h2>
<p>With the addition of <code>useSyncExternalStore</code> it’s &quot;doable&quot; to omit context. Here’s how the updated <code>useObservable</code> would look:</p>
<pre><code class="language-ts">function useObservable&lt;T&gt;(observable: BehaviorSubject&lt;T&gt;): T {
  const observableRef = useRef&lt;BehaviorSubject&lt;T&gt;&gt;(observable);

  if (observableRef.current !== observable) {
    observableRef.current = observable;
  }

  const subscribe = useCallback((handleStoreChange: VoidFunction) =&gt; {
    const subscription = observableRef.current.subscribe(handleStoreChange);
    return () =&gt; subscription.unsubscribe();
  }, []);

  const getSnapshot = useCallback(() =&gt; {
    return observableRef.current.getValue();
  }, []);
  return useSyncExternalStore(subscribe, getSnapshot);
}
</code></pre>
<p>There’s an obvious &quot;caveat&quot; that we have to use <code>BehaviorSubject</code> instead, as we need access to the &quot;current&quot; value in <code>getSnapshot</code>.</p>
<p>I like my current approach, but if React Suspense requires it, I might revisit it in the future.</p>
<h2>Final thoughts</h2>
<p>So there’s that. I find this approach gives me all the tools I need to tackle anything. No matter how simply a project starts, it has a solid foundation that can absorb any complexity.</p>
<p>By moving my business logic outside React&#x27;s lifecycle, I can write expressive code without feeling limited.</p>
<blockquote>I wonder how many people realize that React Hooks is really just disconnected, less declarative reactive programming.<div><div>— Ben Lesh, RxJS Team Lead</div></div></blockquote>
<p>If you don’t like RxJS, that’s fine! Use Redux Toolkit and its new RTK Query package. xState? Be my guest, love it.</p>
<p><strong>My only suggestion is that you move away from the <code>useState/useReducer</code> approach.</strong> It doesn’t scale. Especially for start-up environments where there are a lot of moving pieces in the UI. Use your time building features, and not optimizing re-renders.</p>
<blockquote>Resources:<ul>
<li><a href="https://www.developerway.com/posts/how-to-write-performant-react-apps-with-context">How to write performant React apps with Context</a></li>
<li><a href="https://beta.reactjs.org/apis/react/useEffect#fetching-data-with-effects">[React Docs] Fetching data with Effects (check the purple section)</a></li>
<li><a href="https://labs.factorialhr.com/posts/hooks-considered-harmful">Hooks Considered Harmful</a></li>
<li><a href="https://blog.saeloun.com/2021/12/30/react-18-usesyncexternalstore-api">Meet the new hook useSyncExternalStore, introduced in React 18 for external stores</a></li>
<li><a href="https://wanago.io/2019/12/09/javascript-design-patterns-facade-react-hooks/">The Facade pattern and applying it to React Hooks</a></li>
<li><a href="https://redux.js.org/faq/organizing-state#do-i-have-to-put-all-my-state-into-redux-should-i-ever-use-reacts-setstate">Do I have to put all my state into Redux? Should I ever use React&#x27;s setState()?</a></li>
</ul></blockquote>]]></content:encoded>
            <author>dnlytras@gmail.com (Dimitrios Lytras)</author>
        </item>
        <item>
            <title><![CDATA[Improving component's re-render performance]]></title>
            <link>https://dnlytras.com/blog/improving-rerender-performance</link>
            <guid isPermaLink="false">https://dnlytras.com/blog/improving-rerender-performance</guid>
            <pubDate>Sat, 09 Jul 2022 00:00:00 GMT</pubDate>
            <description><![CDATA[Making components re-render faster by pre-computing our data]]></description>
            <content:encoded><![CDATA[<blockquote>TL;DR:<!-- -->To avoid having noticeable lag between re-renders, we should identify our expensive computations and pre-compute them before they reach our components. <br/>
Re-renders in React isn’t the devil. But it can be one when we dump all of our business logic there, including the most trivial data transformations.</blockquote>
<h2>The problem</h2>
<p>Let’s examine a common scenario.</p>
<p>We get the backend response, an array of objects.</p>
<pre><code class="language-ts">const data = await fetch(&#x27;/api/books&#x27;).then((res) =&gt; res.json());
</code></pre>
<p>We map over this array to render a table.</p>
<pre><code class="language-tsx">&lt;Table.Container&gt;
  &lt;Table.Header&gt;
    &lt;Table.HeaderCell&gt;...&lt;/Table.HeaderCell&gt;
    &lt;Table.HeaderCell&gt;...&lt;/Table.HeaderCell&gt;
    &lt;Table.HeaderCell&gt;...&lt;/Table.HeaderCell&gt;
  &lt;/Table.Header&gt;
  {data.map((book) =&gt; (
    &lt;Table.Row key={book.uuid}&gt;
      &lt;Table.Cell&gt;
        &lt;ComponentA propA={book.propertyA} propB={book.propertyB} /&gt;
      &lt;/Table.Cell&gt;
      &lt;Table.Cell&gt;
        &lt;ComponentB propC={book.propertyC} /&gt;
      &lt;/Table.Cell&gt;
      &lt;Table.Cell&gt;
        &lt;ComponentC propA={book.propertyA} propB={book.propertyB} /&gt;
      &lt;/Table.Cell&gt;
    &lt;/Table.Row&gt;
  ))}
&lt;/Table.Container&gt;
</code></pre>
<p>Each row of the table is a component that holds multiple sub-components.</p>
<pre><code class="language-tsx">&lt;Table.Cell&gt;
  &lt;ComponentA {...book.propertyA} /&gt;
&lt;/Table.Cell&gt;
</code></pre>
<img alt="table-1" loading="lazy" width="760" height="390" decoding="async" data-nimg="1" style="color:transparent" srcSet="https://res.cloudinary.com/ds9pd4ywd/image/upload/f_webp%2cw_828%2cq_90/v1667731630/blog-images/posts/improving-rerender-performance/table-1_xeox1x.png 1x, https://res.cloudinary.com/ds9pd4ywd/image/upload/f_webp%2cw_1920%2cq_90/v1667731630/blog-images/posts/improving-rerender-performance/table-1_xeox1x.png 2x" src="https://res.cloudinary.com/ds9pd4ywd/image/upload/f_webp%2cw_1920%2cq_90/v1667731630/blog-images/posts/improving-rerender-performance/table-1_xeox1x.png"/>
<p>There shouldn’t be an issue, right? Our content changes only by applying filters, pagination, or sorting. All will trigger a fresh batch of data, so there won’t be a noticeable change.</p>
<p>But, what if we implement a functionality where we can select a range of items? For example, we can pick the first ten items, the last ten items, or the whole page. Here’s where things get tricky.</p>
<pre><code class="language-tsx">&lt;Table.Container&gt;
  &lt;Table.Header&gt;
    &lt;Table.HeaderCell&gt;
      &lt;Checkbox
        state={tableSelectionState} // &#x27;ALL&#x27; | &#x27;SOME&#x27; | &#x27;NONE&#x27;
        onChange={handleAllSelection}
      /&gt;
    &lt;/Table.HeaderCell&gt;
    &lt;Table.HeaderCell&gt;...&lt;/Table.HeaderCell&gt;
    &lt;Table.HeaderCell&gt;...&lt;/Table.HeaderCell&gt;
    &lt;Table.HeaderCell&gt;...&lt;/Table.HeaderCell&gt;
  &lt;/Table.Header&gt;
  {data.map((book) =&gt; (
    &lt;Table.Row key={book.uuid}&gt;
      &lt;Table.Cell&gt;
        &lt;Checkbox
          isChecked={selectedBooks.includes(book.uuid)}
          onChange={handleCheckboxSelect}
        /&gt;
      &lt;/Table.Cell&gt;
      &lt;Table.Cell&gt;
        &lt;ComponentA propA={book.propertyA} propB={book.propertyB} /&gt;
      &lt;/Table.Cell&gt;
      &lt;Table.Cell&gt;
        &lt;ComponentB propC={book.propertyC} /&gt;
      &lt;/Table.Cell&gt;
      &lt;Table.Cell&gt;
        &lt;ComponentC propA={book.propertyA} propB={book.propertyB} /&gt;
      &lt;/Table.Cell&gt;
    &lt;/Table.Row&gt;
  ))}
&lt;/Table.Container&gt;
</code></pre>
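<p>The header checkbox state from the snippet (<code>&#x27;ALL&#x27; | &#x27;SOME&#x27; | &#x27;NONE&#x27;</code>) is itself a cheap derivation over the selected ids. A sketch of what such a helper might look like (the function name is mine):</p>
<pre><code class="language-ts">type TableSelectionState = &#x27;ALL&#x27; | &#x27;SOME&#x27; | &#x27;NONE&#x27;;

// Derive the tri-state header checkbox from the current page&#x27;s row ids.
export function getTableSelectionState(
  selectedIds: Array&lt;string&gt;,
  pageIds: Array&lt;string&gt;
): TableSelectionState {
  const selectedOnPage = pageIds.filter((id) =&gt; selectedIds.includes(id)).length;
  if (selectedOnPage === 0) return &#x27;NONE&#x27;;
  return selectedOnPage === pageIds.length ? &#x27;ALL&#x27; : &#x27;SOME&#x27;;
}
</code></pre>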
<img alt="table-2" loading="lazy" width="760" height="369" decoding="async" data-nimg="1" style="color:transparent" srcSet="https://res.cloudinary.com/ds9pd4ywd/image/upload/f_webp%2cw_828%2cq_90/v1667731631/blog-images/posts/improving-rerender-performance/table-2_yk4uiw.png 1x, https://res.cloudinary.com/ds9pd4ywd/image/upload/f_webp%2cw_1920%2cq_90/v1667731631/blog-images/posts/improving-rerender-performance/table-2_yk4uiw.png 2x" src="https://res.cloudinary.com/ds9pd4ywd/image/upload/f_webp%2cw_1920%2cq_90/v1667731631/blog-images/posts/improving-rerender-performance/table-2_yk4uiw.png"/>
<p>By clicking the &#x27;select-all&#x27; checkbox in the header row, all the components will re-render. This is a very expensive operation.</p>
<p>The same applies to selecting a single row. The parent that holds the state will update, and the rest of the components will re-render.</p>
<p>Oh no, it’s slow. It has always been.</p>
<p>No point thinking about these pesky re-renders. Our components were slow from the start.</p>
<h2>Extracting expensive computations</h2>
<p>We should dig deep into our components and look for the expensive computations.</p>
<p>Commonly the culprits are:</p>
<ol>
<li>Date parsing/formatting</li>
<li>Nested loops</li>
</ol>
<p>By formatting our data the moment we get it from the backend (adding an adapter, if you will), we guarantee two things:</p>
<ul>
<li>All the expensive computations are done up front, co-located, easier to test, and can often be cached.</li>
<li>Future incompatibilities with the backend will be fixed in a single place.</li>
</ul>
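<p>Concretely, such an adapter can be the only place that touches <code>Date</code>: format once per row when the response arrives, so the cells render plain strings. A sketch with invented field names:</p>
<pre><code class="language-ts">type ApiBook = {uuid: string; title: string; publishedAt: string};
type BookRow = {uuid: string; title: string; publishedDate: string};

// Reuse one formatter; constructing Intl.DateTimeFormat repeatedly is the expensive part.
const dateFormatter = new Intl.DateTimeFormat(&#x27;en-GB&#x27;, {dateStyle: &#x27;medium&#x27;});

// Runs once per fetch, so re-renders only touch precomputed strings.
export function toBookRows(books: Array&lt;ApiBook&gt;): Array&lt;BookRow&gt; {
  return books.map((book) =&gt; ({
    uuid: book.uuid,
    title: book.title,
    publishedDate: dateFormatter.format(new Date(book.publishedAt)),
  }));
}
</code></pre>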
<h2>Final thoughts</h2>
<p>Before going crazy over re-renders and memoizing everything, we should take a step back and figure out whether our components are doing too much.</p>
<p>For more on composition, re-renders, and related topics, I suggest this great read: <a href="https://www.developerway.com/posts/react-elements-children-parents">The mystery of React Element, children, parents and re-renders</a></p>
<blockquote>Resources:<ul>
<li><a href="https://kentcdodds.com/blog/fix-the-slow-render-before-you-fix-the-re-render">&quot;Fix the slow render before you fix the re-render&quot;</a></li>
<li><a href="https://alexsidorenko.com/blog/react-performance-slow-renders/">&quot;How to Detect Slow Renders in React?&quot;</a></li>
<li><a href="https://swizec.com/blog/you-can-use-react-query-for-slow-computation-not-just-api/">&quot;You can use React Query for slow computation, not just API&quot;</a></li>
</ul></blockquote>]]></content:encoded>
            <author>dnlytras@gmail.com (Dimitrios Lytras)</author>
        </item>
        <item>
            <title><![CDATA[How I interview people]]></title>
            <link>https://dnlytras.com/blog/how-i-interview-people</link>
            <guid isPermaLink="false">https://dnlytras.com/blog/how-i-interview-people</guid>
            <pubDate>Fri, 22 Apr 2022 00:00:00 GMT</pubDate>
            <description><![CDATA[And how I want to be interviewed]]></description>
            <content:encoded><![CDATA[<p>Every hiring interview, much like any other meeting, should have a clear agenda. Personally, in the end I want to be able to answer these questions:</p>
<ol>
<li>Is the candidate a good fit for the team?</li>
<li>Can the candidate challenge the ways we do things, propose ideas, and improve the team?</li>
</ol>
<p>I&#x27;ve been on the other side too, and in that case my questions to answer are the following:</p>
<ol>
<li>Does this job align with my preferred working conditions?</li>
<li>Can I get ownership? How much?</li>
<li>What&#x27;s the pace like?</li>
<li>What&#x27;s their engineering culture?</li>
<li>Do they care about their craft? How do they comb their code debt?</li>
<li>Will I grow? If I stay for 4 years, will I feel I repeated the same year four times?</li>
<li>Are they remote-first or remote-out-of-necessity?</li>
<li>Is it a step up money-wise?</li>
<li>How are promotions handled? Is being an IC viable?</li>
</ol>
<h3>Starting with an honest intro</h3>
<p>I preface the interview with an introduction about me. I let them know about my role, its responsibilities, and how long I’ve been doing it. Then, I move on and tell them a bit about my previous job and any other one before that. Lastly, I tell them about how I got into the industry and how I experienced the JavaScript popularity explosion. It shouldn’t take more than a minute.</p>
<p>Then, I let the candidate share their experience.</p>
<p>I might be a spokesperson for my company, but to have an honest conversation with the candidate, I have to open up. Let them know that I, too, am a developer who has a career, who has changed jobs, and who will do it again. We are pretty much in the same position, and we&#x27;re having a call to see if we can collaborate.</p>
<h3>Expanding on company, team, and role specifics</h3>
<p>Next up, I will share more about the company, the position, and how we work. I don’t expect the candidate to know much about the company, but I want to make sure they know what we do.</p>
<p>Succinctly, I’ll cover</p>
<ul>
<li>The company product, the clients, and which problem it solves for them</li>
<li>The company size, its growth projection, and funding status</li>
<li>How the company is structured</li>
<li>How the team operates</li>
<li>The stack, the development process, and the practices we follow</li>
<li>The responsibilities of the role</li>
<li>Continuous learning and growth</li>
<li>Remote policies, if applicable</li>
</ul>
<p>And finally, I answer any questions.</p>
<blockquote><p>It’s important for the candidate to feel confident asking questions they care about. It should be clear that it’s for their benefit and that they won’t be evaluated on them. Asking about the number of meetings, employee retention, and such is <strong>not</strong> taboo. I’m also an employee, and I’ve had my fair share of bad experiences because I didn’t ask enough questions beforehand.</p></blockquote>
<p>It’s best to be transparent, even if it means losing the candidate. For example, if they want to work in a small company, and the plan is to hire 100 more people, they have to know it now. No surprises.</p>
<h3>Evaluating working experience</h3>
<p>Moving on, the goal is to discuss their relevant working experience. Since I’m interviewing for front-end positions, I’ll probably lead with React. I’ll point out problems I&#x27;ve faced when building apps, and ask the candidate to join me and share their thoughts, based on their experience.</p>
<p>It’s just a normal discussion, as if we were talking through a ticket. I want to understand what challenges they&#x27;ve faced, their takeaways, the things they care about, and all the nice stuff you don’t get from code-golfing.</p>
<h3>Aligning</h3>
<p>By now, the candidate should have a good idea of whether they like what they hear. Maybe it’s not clear yet, so let’s align.</p>
<p>Here, I ask the candidate what they are hoping to find in our team (money aside). Essentially, what it is that they value. It might sound tacky, I know, but different teams have different cultures. Better to clear up any misunderstandings now.</p>
<h3>Recap</h3>
<p>One final call for questions, and we wrap this up. I’ll inform them of the next steps, and ask them if they would like to proceed.</p>
<h3>Final thoughts</h3>
<p>If you’re interviewing someone, try to have an honest conversation. That&#x27;s pretty much what I&#x27;ve learnt. Maybe there are social engineering tricks I&#x27;m not aware of, but I don&#x27;t care. I know that I might be on the other side of the table soon.</p>]]></content:encoded>
            <author>dnlytras@gmail.com (Dimitrios Lytras)</author>
        </item>
        <item>
            <title><![CDATA[Modern PHP]]></title>
            <link>https://dnlytras.com/blog/modern-php</link>
            <guid isPermaLink="false">https://dnlytras.com/blog/modern-php</guid>
            <pubDate>Mon, 14 Mar 2022 00:00:00 GMT</pubDate>
            <description><![CDATA[While I wasn’t paying attention, PHP got quite good]]></description>
<content:encoded><![CDATA[<p>The last time I used PHP was probably around 2017, and even then only in the context of supporting some WordPress sites. By that time 7.2 had already been released, but I had no idea. I wanted to avoid working with PHP at all costs.</p>
<p>This month I took some time to check what good things have been added to the language that I was unaware of. To be honest, things are looking great.</p>
<p>The list below is not exhaustive; I only reference what I’ll be using, or find notable.</p>
<blockquote>Contents:<ul>
<li><a href="#array-destructuring">Array destructuring</a> <strong>(v7.1)</strong></li>
<li><a href="#spread-operator-within-arrays">Spread operator within arrays</a> <strong>(v7.4)</strong> &amp; <strong>(v8.1)</strong></li>
<li><a href="#match-expressions">Match expressions</a> <strong>(v8.0)</strong></li>
<li><a href="#enumerations-enums">Enumerations (enums)</a> <strong>(v8.1)</strong></li>
<li><a href="#arrow-functions">Arrow functions</a> <strong>(v7.4)</strong></li>
<li><a href="#named-parameters">Named parameters</a> <strong>(v8.0)</strong></li>
<li><a href="#null-coalescing-operator">Null coalescing operator</a> <strong>(v7.0)</strong></li>
<li><a href="#null-coalescing-assignment-operator">Null coalescing assignment operator</a> <strong>(v7.4)</strong></li>
<li><a href="#null-safe-operator">Null-safe operator</a> <strong>(v8.0)</strong></li>
<li><a href="#spaceship-operator">Spaceship operator</a> <strong>(v7.0)</strong></li>
<li><a href="#multi-catch-exception-handling">Multi catch exception handling</a> <strong>(v7.1)</strong></li>
<li><a href="#new-string-utils">str_starts_with, str_ends_with, str_contains</a> <strong>(v8.0)</strong></li>
<li><a href="#return-types">Return Types</a> <strong>(v7.0)</strong></li>
<li><a href="#union-types">Union types</a> <strong>(v8.0)</strong></li>
<li><a href="#null-and-void-return-types">Null and Void return types</a> <strong>(v7.1)</strong></li>
<li><a href="#never-return-type">Never return type</a> <strong>(v8.1)</strong></li>
<li><a href="#import-grouping">Grouped imports</a> <strong>(v7.0)</strong></li>
<li><a href="#constructor-property-promotion">Constructor property promotion</a> <strong>(v8.0)</strong></li>
<li><a href="#weakmaps">WeakMaps</a> <strong>(v8.0)</strong></li>
</ul></blockquote>
<h3>Array destructuring</h3>
<p>Added in v7.1</p>
<pre><code class="language-php">// arrays
$posts = [[1, 2, 3, 4], [5, 6, 7]];
[$publishedPosts, $draftPosts] = $posts;
// or skip
[, $draftPosts] = $posts;

// associative arrays
$post = [
  &#x27;title&#x27; =&gt; &#x27;Modern PHP&#x27;,
  &#x27;description&#x27; =&gt; &#x27;...&#x27;,
  &#x27;status&#x27; =&gt; &#x27;Published&#x27;
];

[&#x27;title&#x27; =&gt; $title, &#x27;description&#x27; =&gt; $description] = $post;
</code></pre>
<h3>Spread operator within arrays</h3>
<ul>
<li>Initial support for arrays in v7.4</li>
<li>Support for string-keyed (associative) arrays in v8.1</li>
</ul>
<pre><code class="language-php">// arrays
$userPosts = [1, 2, 3, 4, 5];
$userPosts = [...$userPosts, 6];

// associative arrays
$userPosts = [
  &#x27;id-a&#x27; =&gt; &#x27;Published&#x27;,
  &#x27;id-b&#x27; =&gt; &#x27;Draft&#x27;,
];
$userPosts = [...$userPosts, &#x27;id-c&#x27; =&gt; &#x27;Draft&#x27;];
</code></pre>
<h3>Match expressions</h3>
<p>Added in v8.0</p>
<p>The one feature I’m most jealous of, since it’s missing in JavaScript.</p>
<pre><code class="language-php">// using a switch statement
switch ($status) {
 case &#x27;Published&#x27;:
  $message = &#x27;The post has been published&#x27;;
  break;
 case &#x27;Draft&#x27;:
  $message = &#x27;The post is in draft state&#x27;;
  break;
}

// as a match expression
$message = match($status) {
  &#x27;Published&#x27; =&gt; &#x27;The post has been published&#x27;,
  &#x27;Draft&#x27; =&gt; &#x27;The post is in draft state&#x27;,
};
</code></pre>
<h3>Enumerations (enums)</h3>
<p>Added in v8.1</p>
<pre><code class="language-php">// previously using a class
class Status {
  const DRAFT = &#x27;Draft&#x27;;
  const PUBLISHED = &#x27;Published&#x27;;
  const ARCHIVED = &#x27;Archived&#x27;;
}

// using an enum
enum PostStatus {
  case Draft;
  case Published;
  case Archived;
}

$status = PostStatus::Draft;
</code></pre>
<h3>Arrow functions</h3>
<p>Added in v7.4</p>
<p><code>fn (args) =&gt; expression</code></p>
<p>(!) Arrow functions can’t be multi-line</p>
<pre><code class="language-php">$publishedPosts = array_filter($posts,
  fn($post) =&gt; $post-&gt;status === &#x27;Published&#x27;
);
</code></pre>
<h3>Named parameters</h3>
<p>Added in v8.0</p>
<pre><code class="language-php">function enableThisConfig($optionA, $optionB, $somethingElse, $anotherOne) {
 //
}

// 6 months later reading this.. ??
enableThisConfig(true, true, false, &#x27;DEV&#x27;);

// using named params
enableThisConfig(
  optionA: true,
  optionB: true,
  somethingElse: false,
  anotherOne: &#x27;DEV&#x27;,
);
</code></pre>
<h3>Null coalescing operator</h3>
<p>Added in v7.0</p>
<pre><code class="language-php">// this annoying thing
$data[&#x27;name&#x27;] = isset($data[&#x27;name&#x27;]) ? $data[&#x27;name&#x27;] : &#x27;Guest&#x27;;
// to
$data[&#x27;name&#x27;] = $data[&#x27;name&#x27;] ?? &#x27;Guest&#x27;;
</code></pre>
<h3>Null coalescing assignment operator</h3>
<p>Added in v7.4</p>
<pre><code class="language-php">// our previous example
$data[&#x27;name&#x27;] = $data[&#x27;name&#x27;] ?? &#x27;Guest&#x27;;
// to
$data[&#x27;name&#x27;] ??= &#x27;Guest&#x27;;
</code></pre>
<h3>Null-safe operator</h3>
<p>Added in v8.0</p>
<pre><code class="language-php">class User {
  public function getPreferences() {}
}
class Preferences {
  public function getColorScheme() {}
}

$user = new User;

// ❌ preferences could be null
$colorScheme = $user-&gt;getPreferences()-&gt;getColorScheme();

// alternative
$preferences = $user-&gt;getPreferences();
$colorScheme = $preferences ? $preferences-&gt;getColorScheme() : null;

// using null-safe operator
$colorScheme = $user-&gt;getPreferences()?-&gt;getColorScheme();

</code></pre>
<h3>Spaceship operator</h3>
<p>Added in v7.0</p>
<pre><code class="language-php">$result = 1 &lt;=&gt; 1 // 0
$result = 1 &lt;=&gt; 2 // -1
$result = 2 &lt;=&gt; 1 // 1

$array = [1, 4, 5, 6, 7, 8, 9, 2];
usort($array, fn($a, $b) =&gt; $a &lt;=&gt; $b);
</code></pre>
<h3>Multi-catch exception handling</h3>
<p>Added in v7.1</p>
<pre><code class="language-php">try {
 // ...
} catch(ErrorA | ErrorB $e) {
 //
} catch(Exception $e) {
 // general case
}
</code></pre>
<h3>New string utils</h3>
<p>Added in v8.0</p>
<p>Previously we would use <code>strpos</code> or some other creative solution.</p>
<pre><code class="language-php">$string = &#x27;Modern PHP&#x27;;

if (str_starts_with($string, &#x27;Modern&#x27;)) {}
if (str_ends_with($string, &#x27;PHP&#x27;)) {}
if (str_contains($string, &#x27;Modern&#x27;)) {}
</code></pre>
<h3>Return types</h3>
<p>Added in v7.0</p>
<pre><code class="language-php">function getPost(int $postId): Post {
 //
}
</code></pre>
<h3>Union types</h3>
<p>Added in v8.0</p>
<pre><code class="language-php">function updateTotal(int|float $cost) {
 //
}
</code></pre>
<h3>Null and Void return types</h3>
<p>Added in v7.1</p>
<pre><code class="language-php">// Notice the `?` before the type
function getPost(int $postId): ?Post {
 // can return null
}

function setPostTitle(int $postId, string $title): void {
  persistInDB($postId, $title);
 // won&#x27;t return anything, or optionally can do:
 // `return;`
}
</code></pre>
<h3>Never return type</h3>
<p>Added in v8.1</p>
<pre><code class="language-php">function notImplemented(): never {
  throw new Exception(&#x27;Not implemented&#x27;);
}
</code></pre>
<h3>Import grouping</h3>
<p>Added in v7.0</p>
<p>A small but welcome addition</p>
<pre><code class="language-php">// all in separate lines
use App\Entity\Task;
use App\Entity\Reminder;
use App\Entity\Todo;

// now
use App\Entity\{Task, Reminder, Todo};
</code></pre>
<h3>Constructor property promotion</h3>
<p>Added in v8.0</p>
<pre><code class="language-php">// from this
class User {
  public string $name;
  public Address $address;

  public function __construct(string $name, Address $address) {
    $this-&gt;name = $name;
    $this-&gt;address = $address;
 }
}

// to this
class User {
  public function __construct(public string $name, public Address $address) {
  // nothing else needed
  }
}
</code></pre>
<h3>WeakMaps</h3>
<p>Added in v8.0</p>
<p>I’m not really using WeakMaps, not even in JavaScript, but I find them a great addition</p>
<pre><code class="language-php">// code taken from https://php.watch/versions/8.0/weakmap
class CommentRepository {
  private WeakMap $comments_cache;

  public function __construct() {
    $this-&gt;comments_cache = new WeakMap();
  }
  public function getCommentsByPost(Post $post): ?array {
    if (!isset($this-&gt;comments_cache[$post])) {
      $this-&gt;comments_cache[$post] = $this-&gt;loadComments($post);
    }

    return $this-&gt;comments_cache[$post];
  }
}
</code></pre>
<hr/>
<p>I have to say I’m very happy to see these features, and I look forward to the next PHP releases with great excitement. Although I’ll mostly be working with Laravel and its extensive utilities (<a href="https://laravel.com/docs/9.x/collections">collections</a> &amp; <a href="https://laravel.com/docs/9.x/helpers">helpers</a>), watching the language grow is lovely.</p>
<p>🐘</p>]]></content:encoded>
            <author>dnlytras@gmail.com (Dimitrios Lytras)</author>
        </item>
        <item>
            <title><![CDATA[Nominal types in TypeScript]]></title>
            <link>https://dnlytras.com/blog/nominal-types</link>
            <guid isPermaLink="false">https://dnlytras.com/blog/nominal-types</guid>
            <pubDate>Tue, 08 Mar 2022 00:00:00 GMT</pubDate>
            <description><![CDATA[Improving our type-safety with fine-grained types]]></description>
            <content:encoded><![CDATA[<p>Consider the following case:</p>
<pre><code class="language-ts">type PostId = string;
type UserId = string;

function publishPost(userId: UserId, postId: PostId) {
  //
}

const postId: PostId = &#x27;post-xyz-123&#x27;;
const userId: UserId = &#x27;user-xyz-123&#x27;;

publishPost(userId, postId); // correct
publishPost(postId, userId); // would love to error, but passes
</code></pre>
<p><a href="https://www.typescriptlang.org/play?#code/C4TwDgpgBACg9gZ2ASQCZQLxSQJwJYB2A5gNwBQokUAqghDmptsPseWQGYCuBAxsHjgEoYLgCMANngQALeEgAUXOg1QAuGirQAaEYhTrY+tAEooAbzJQA9NahQyAXzJleQpHqRoN8g0wDkYPoAtAAeIABewQCMAEwAzP7kbgQeyvTemhnoWP7pOGGRMQn+LqKS0nL6SlqoukFeqCYkNnZuODgQ-GTlUrK+Cg0GuvmmLbbYMnBcEuj0OHA4umJcwCIAhgh0CEA">Playground Link</a></p>
<p>We would expect TypeScript to prevent us from passing a value of type <code>PostId</code> when <code>UserId</code> is expected.</p>
<p>Unfortunately, both <code>userId</code> &amp; <code>postId</code> can be used interchangeably. For the compiler, the names are irrelevant, as they are an alias to the same <code>string</code> type.</p>
<p>TypeScript doesn’t care how we name our values, only whether the shape they describe (e.g. <code>string</code>) can satisfy the constraints.</p>
<h2>Structural typing</h2>
<blockquote>If you want a thorough explanation, feel free to read <a href="https://www.typescriptlang.org/docs/handbook/type-compatibility.html">the entry from the official docs</a> and skip this section.</blockquote>
<p>TypeScript uses structural typing, which is a bit different from what you might have used in Java.</p>
<p>In our case, TypeScript doesn’t care whether a type explicitly inherits from another. If its contents are equal (or a superset), it’s fine. Here’s another example:</p>
<pre><code class="language-ts">type Student = {name: string};
type Teacher = {name: string};

const student: Student = {name: &#x27;Benjamin&#x27;};
const teacher: Teacher = student; // no issue
</code></pre>
<p>For TypeScript, both <code>Student</code> &amp; <code>Teacher</code> are equivalent. As long as the contents are the same, any implementation that uses a <code>Student</code> value can also be satisfied with a <code>Teacher</code> one. In languages like Java, this is a no-go.</p>
<pre><code class="language-java">class Student {
 public String name;
}
class Teacher {
 public String name;
}

// error: incompatible types: Teacher cannot be converted to Student
Student student = new Teacher();
</code></pre>
<p>Of course, in our TypeScript snippet, if we add another property to the <code>Teacher</code>, we will be notified of it.</p>
<pre><code class="language-ts">// Property &#x27;classes&#x27; is missing in type &#x27;Student&#x27; but required in type &#x27;Teacher&#x27;.(2741)
type Teacher = {name: string; classes: Array&lt;string&gt;};
</code></pre>
<p>We might feel a false sense of security when using TypeScript, expecting the compiler to save us. But, in reality, we might be introducing similar bugs in our code if we&#x27;re not careful.</p>
<h2>How can we fix this?</h2>
<p>We need a way to differentiate between the two types.</p>
<p>First, we create a symbol that we can use to identify our nominal types. It’s worth noting that it won’t exist after compilation.</p>
<pre><code class="language-ts">declare const __nominal__type: unique symbol;
</code></pre>
<p>Then we enrich our base type (e.g. <code>string</code>) with that symbol. Two types are now considered equivalent only if they carry the same identifier, in addition to having the same contents.</p>
<pre><code class="language-ts">declare const __nominal__type: unique symbol;

export type Nominal&lt;Type, Identifier&gt; = Type &amp; {
  readonly [__nominal__type]: Identifier;
};
</code></pre>
<p>Some examples, where we might need a distinction:</p>
<pre><code class="language-ts">type UserId = Nominal&lt;string, &#x27;UserId&#x27;&gt;;
type PostId = Nominal&lt;string, &#x27;PostId&#x27;&gt;;
type OrgId = Nominal&lt;string, &#x27;OrgId&#x27;&gt;;
type ProjectId = Nominal&lt;string, &#x27;ProjectId&#x27;&gt;;

type CustomerId = Nominal&lt;string, &#x27;CustomerId&#x27;&gt;;
type ClientId = Nominal&lt;string, &#x27;ClientId&#x27;&gt;;

type projectInvitationToken = Nominal&lt;string, &#x27;projectInvitationToken&#x27;&gt;;
type passwordResetToken = Nominal&lt;string, &#x27;passwordResetToken&#x27;&gt;;

type EUR = Nominal&lt;number, &#x27;EUR&#x27;&gt;;
type USD = Nominal&lt;number, &#x27;USD&#x27;&gt;;

type Miles = Nominal&lt;number, &#x27;Miles&#x27;&gt;;
type Kilometers = Nominal&lt;number, &#x27;Kilometers&#x27;&gt;;
</code></pre>
<p>There you have it:</p>
<pre><code class="language-ts">type UserId = Nominal&lt;string, &#x27;UserId&#x27;&gt;;
type PostId = Nominal&lt;string, &#x27;PostId&#x27;&gt;;

let userId = &#x27;xyz&#x27; as UserId;
let postId = &#x27;xyz&#x27; as PostId;

/*
Type &#x27;PostId&#x27; is not assignable to type &#x27;UserId&#x27;.
 Type &#x27;PostId&#x27; is not assignable to type &#x27;{ readonly [__nominal__type]: &quot;UserId&quot;; }&#x27;.
 Types of property &#x27;[__nominal__type]&#x27; are incompatible.
 Type &#x27;&quot;PostId&quot;&#x27; is not assignable to type &#x27;&quot;UserId&quot;&#x27;.
*/
userId = postId;
</code></pre>
<h2>The elephant in the room</h2>
<p>Yes, we have to use <code>as</code>. We can’t assign the <code>xyz</code> string directly.</p>
<pre><code class="language-ts">// fails
/*
Type &#x27;string&#x27; is not assignable to type &#x27;UserId&#x27;.
 Type &#x27;string&#x27; is not assignable to type &#x27;{ readonly [__nominal__type]: &quot;UserId&quot;; }&#x27;.
*/
let userId: UserId = &#x27;xyz&#x27;;
/*
Type &#x27;string&#x27; is not assignable to type &#x27;PostId&#x27;.
 Type &#x27;string&#x27; is not assignable to type &#x27;{ readonly [__nominal__type]: &quot;PostId&quot;; }&#x27;.
*/
let postId: PostId = &#x27;xyz&#x27;;

// works
let userId = &#x27;xyz&#x27; as UserId;
let postId = &#x27;xyz&#x27; as PostId;
</code></pre>
<p>You might consider it cheating, but TypeScript has sharp knives in its feature set. It’s up to us to know when to use them.</p>
<p>That said, we can improve it. Let’s concentrate the <code>as</code> type-casting in one place.</p>
<pre><code class="language-ts">function UserId(id: string): UserId {
  // validation can go here
  return id as UserId;
}

function PostId(id: string): PostId {
  // validation can go here
  return id as PostId;
}

let userId = UserId(&#x27;id&#x27;);
let postId = PostId(&#x27;id&#x27;);
</code></pre>
<p><a href="https://www.typescriptlang.org/play?#code/CYUwxgNghgTiAEYD2A7AzgF3gfWypAtgJYpQS4YCeADiAFzwCuKRAjowmpQQEZIQBuAFAgAHtSQwsVWvAByhEmQA8AFRogANPACSoFBiIAzIiBgA+eAF5462QDJ4AbyHx4cKMFQRK8ANq4+MSk5NgyIAC6DHogBsamMMIAvsJC4fAAqmhmetbyiiHKmDAkAObaAORZOcAV5sLpAApImLk2CsEqxWWVza219UJCRsxghqiZ2TB6ABREwAzdKKUAlAzV08DOrm7uIBiMMCjw8-BQaJM1QklDIyhjRBN9GLPzixgly2vwz7kuu3sDkcTltzj8Wi9gNchhB9kwpnp1gitjYNrMKvMKithLCsBJ+gxfijwf0ZhjatigA">Playground Link</a></p>
<p>It will also allow us to take it a step further and introduce validation, using a library like <a href="https://github.com/colinhacks/zod">Zod</a>.</p>
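<p>As a minimal sketch of what that baked-in validation might look like, without any library (the <code>user-</code> prefix rule below is a made-up assumption, purely for illustration):</p>
<pre><code class="language-ts">type UserId = Nominal&lt;string, &#x27;UserId&#x27;&gt;;

function UserId(id: string): UserId {
  // hypothetical rule: user ids must start with &#x27;user-&#x27;
  if (!id.startsWith(&#x27;user-&#x27;)) {
    throw new Error(`Invalid UserId: ${id}`);
  }
  return id as UserId;
}

const userId = UserId(&#x27;user-xyz-123&#x27;); // passes validation
</code></pre>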
<blockquote>Resources:<ul>
<li><a href="https://blog.beraliv.dev/2021-05-07-opaque-type-in-typescript">Opaque Types</a></li>
<li><a href="https://github.com/microsoft/TypeScript/issues/202">[GitHub] Support some non-structural (nominal) type matching #202</a></li>
<li><a href="https://medium.com/redox-techblog/structural-typing-in-typescript-4b89f21d6004">Structural Typing in TypeScript</a></li>
<li><a href="https://github.com/gcanti/newtype-ts">newtype-ts: Implementation of newtypes in TypeScript</a></li>
</ul></blockquote>]]></content:encoded>
            <author>dnlytras@gmail.com (Dimitrios Lytras)</author>
        </item>
        <item>
            <title><![CDATA[Working remotely]]></title>
            <link>https://dnlytras.com/blog/remote-work</link>
            <guid isPermaLink="false">https://dnlytras.com/blog/remote-work</guid>
            <pubDate>Sun, 10 Oct 2021 00:00:00 GMT</pubDate>
            <description><![CDATA[Reflecting on remote work, the good and the bad things]]></description>
            <content:encoded><![CDATA[<blockquote>I wrote this in 2018 when I landed my first remote job.<br/>I updated it in 2021 with COVID in mind.<br/>Now, I&#x27;m a father and I can&#x27;t think of a better perk.</blockquote>
<p>Working remotely is a whole different world. A world that I&#x27;ve been living in since 2017.</p>
<p>For some people, cutting out the social interactions of the office while having to work from the very same place they sleep is a nightmare. I get it. It sounds horrible if you put it that way. Here are the benefits I see:</p>
<ul>
<li>
<p><strong>Avoiding commuting</strong></p>
<p>I’d be lying if I said this isn’t the biggest factor. Getting back these ~6 hours per week of contemplating life in the subway is pretty sweet.
Believe me, when there are strikes or heavy rain, my morning coffee tastes a bit better.</p>
</li>
<li>
<p><strong>Avoiding a noisy office</strong></p>
<p>Sometimes you just want to do some work and call it a day. No water-cooler conversations, no forced banter, just some good honest work.</p>
</li>
<li>
<p><strong>Having a healthier diet</strong></p>
<p>Being able to cook a rich breakfast and having quality snacks around are awesome perks I’ve come to value highly. Eating in a rush with everyone or resorting to takeaway food isn’t something sustainable. Taking a break to cook mindfully really helps me get back to work refreshed.</p>
</li>
<li>
<p><strong>Better workout routine</strong></p>
<p>Morning workouts are more chill. Knowing I just have to go back home, take a shower and start my work without rush or preparation helps me immensely. It’s also a task completed before work even starts. Previously I would have to find the courage, after a tiring day, to hit the gym during peak hours.</p>
</li>
<li>
<p><strong>Having greater flexibility</strong></p>
<p>Taking my break to go for groceries mid-day, having lunch with my <del>SO</del> wife, or being there to receive a package, are small wins that matter.</p>
</li>
<li>
<p><strong>Working with a distributed team</strong></p>
<p>Nowadays my teammates are scattered throughout the world. If you want to build a diverse and skilled team, you have to recruit from everywhere. This makes Zoom meetings an unavoidable reality. Why go from point A to B, have a couple of calls, and return?</p>
</li>
<li>
<p><strong>Using my own bathroom</strong></p>
<p>I don’t think I have to expand on this.</p>
</li>
</ul>
<h3>Working as a remote team</h3>
<p>I like remote work because I work asynchronously. Working from a distance demands that your team transforms the way it works. Progress only happens when there is clear communication, proper documentation, and respect for other people&#x27;s time.</p>
<p>None of this is unique to remote companies. Everyone benefits from these practices, but it’s easier to neglect them when working in person. Working remotely forces your hand to tackle these inefficiencies if you want to make it.</p>
<h3>Issues</h3>
<p>Of course, there are issues; I’ll only highlight some I&#x27;ve experienced.</p>
<ul>
<li>
<p><strong>Difficulties in communication</strong></p>
<p>Over-communicating isn’t a bad thing. If anything it’s expected and welcomed so everyone knows what’s going on. I’ve written a piece for this called <a href="/blog/chat-in-public">All company chat should be in public</a>.</p>
</li>
<li>
<p><strong>Feeling that someone hates your guts</strong></p>
<p>Unless you&#x27;ve come to know your fellow coworkers in person, it’s hard to have a complete idea of everyone’s personality. Assuming positive intent is critical. Not everyone likes to use exclamation marks and emojis, but they’re probably awesome to hang out with in person.</p>
</li>
<li>
<p><strong>Loneliness</strong></p>
<p>This is a hard topic. Some of the most meaningful friendships I&#x27;ve made are with former coworkers. But on the flip side, some of the most transformative experiences I&#x27;ve had happened because I could make time, thanks to not working in an office.</p>
</li>
</ul>
<h3>Not just your home</h3>
<p>A big misconception is that you have to work from home. For some people, that&#x27;s just not possible.
When home is too much, you can always rent a co-working space for a day or two.</p>
<p>This is the ideal arrangement for me. I want to feel the energy of productive people motivating me to work, without them knowing me.</p>
<hr/>
<p>Remote work isn’t for everyone. Unfortunately, Covid19 made this a reality for many people who don’t prefer it. I feel for everyone who has to cope with this situation, and most specifically all of you who have to cope with smug pricks like me who enjoy it.</p>
<p>I’m just glad that working remotely is now mainstream, and people who were previously trapped can finally find more options.</p>]]></content:encoded>
            <author>dnlytras@gmail.com (Dimitrios Lytras)</author>
        </item>
        <item>
            <title><![CDATA[All company chat should be in public]]></title>
            <link>https://dnlytras.com/blog/chat-in-public</link>
            <guid isPermaLink="false">https://dnlytras.com/blog/chat-in-public</guid>
            <pubDate>Thu, 09 Sep 2021 00:00:00 GMT</pubDate>
            <description><![CDATA[How to make remote work a lot better, with a bit of an effort]]></description>
            <content:encoded><![CDATA[<p>I’ll try to convince you to ban all 1-1 and group chats for work-related matters in your company&#x27;s communication tool.</p>
<p>It probably sounds pretty aggressive but honestly, it’s not. People can still talk about their lives, worries, causes, etc, in their direct messages but not about anything related to the product.</p>
<p>Why?</p>
<h2>You&#x27;re leaving information locked behind</h2>
<p>1-1 &amp; group chats are not indexed. You can’t reference any such discussion by a link in the future, and if the people leave the company, it’s gone forever.</p>
<p>By posting every little discussion in the open:</p>
<ul>
<li>You invite anyone to contribute to the discussion. They probably have context you didn’t know about, and you wouldn’t have invited them otherwise</li>
<li>You are helping others understand why a decision was made, even if you’re not around</li>
<li>You are sharing product knowledge</li>
<li>You are transparent about the work you’re doing</li>
</ul>
<p>Discussing in the open keeps everyone in the loop. For now and the future.</p>
<h2>You&#x27;re not helping cultivate a culture of communication</h2>
<p>If the majority of the communication is hidden in group chats &amp; private channels, you’re creating silos.</p>
<p>People will not feel confident discussing and raising questions in the open. If no one is doing it, why would anyone risk asking a &quot;stupid&quot; question for everyone to see? Better to send a direct message, or just not ask at all.</p>
<p>This right here is why managers think remote work isn’t working for their team. People make mistakes due to lack of context, fall out of sync, and generally, chaos ensues.</p>
<p>Foster a collaborative environment, where people are expected to communicate in the open and you will be surprised.</p>
<h2>You might not need daily stand-ups anymore</h2>
<p>By creating a culture of communicating, we don’t have to waste everyone’s time checking whether anything is blocking their progress. What’s the point of scheduling a slot to announce that? Just write about it, and leave it for everyone to read.</p>
<p>Over-communicate. Ask questions, post findings, share progress in the open.</p>
<p>Boom, you saved 5 meetings per week.</p>
<h2>Other improvements</h2>
<ol>
<li><strong>Enforce the same usernames across all applications.</strong> Reduce cognitive load, make it easier for people to ping you across Slack, Jira, GitHub, etc. Are you relaying messages from every service to your Slack? Enjoy the free cross-service ping integration</li>
<li><strong>Pay your services to allow indexing</strong></li>
<li><strong>Use threads.</strong> Each discussion should be separate and self-contained. Be very strict about this</li>
<li><strong>Suppress notifications except direct mentions and threads.</strong> Periodically check your channels of interest, but only be notified of what matters</li>
</ol>
<h2>Final thoughts</h2>
<p>Communication is already hard, let’s not add more roadblocks to it. Inefficiency makes people disinterested. Disinterested people make mistakes. Mistakes cause tension.</p>
<ul>
<li>&quot;Nobody cares, why should I?&quot;</li>
<li>&quot;Why didn’t we think about this before doing X&quot;</li>
<li>&quot;We suck at communicating, let’s schedule a zoom&quot;</li>
</ul>
<p>By putting discussions in the open, you document decisions and provide context for everyone to see. But that&#x27;s only the start.</p>
<p>You cultivate a culture where people are eager to help each other, are proactive, and share.</p>
<p>Why not try this approach for a week?</p>]]></content:encoded>
            <author>dnlytras@gmail.com (Dimitrios Lytras)</author>
        </item>
        <item>
            <title><![CDATA[Ditching manual releases with Changesets]]></title>
            <link>https://dnlytras.com/blog/using-changesets</link>
            <guid isPermaLink="false">https://dnlytras.com/blog/using-changesets</guid>
            <pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
            <description><![CDATA[Hassle free versioning & changelog management with Changesets]]></description>
            <content:encoded><![CDATA[<p>The goal is simple. We want to build a library without having to manually:</p>
<ol>
<li>Update the version</li>
<li>Update the changelog</li>
<li>Create a new release</li>
<li>Publish to NPM</li>
</ol>
<p>Enter <a href="https://github.com/atlassian/changesets">Changesets</a>. This handy package solves all of our problems in a very elegant way.</p>
<h3>TL;DR</h3>
<p>OK, here’s how it works in a CI environment, like GitHub Actions.</p>
<ol>
<li>If your changes should update the package version, you have to include a small markdown file (a changeset). No worries about the format; it’s auto-generated by the Changesets CLI. Just run <code>npx changeset</code> (or <code>yarn changeset</code>), follow the prompt, and you’re good.</li>
<li>Push the changes or merge the PR - essentially just get the updates to the base branch.</li>
<li>A GitHub action will check for any number of these markdown files, calculate the resulting semver bump, and open a PR with the proposed version &amp; the generated changelog. <a href="https://github.com/emotion-js/emotion/pull/2394">Here’s an example from the emotion repository</a>. You read that right: if Alice made a patch change and Bob a minor &amp; a patch one, the final version will be just a single minor bump.</li>
<li>When you’re ready, merging this PR will add the changelog and will trigger another GitHub action that will publish the package in the background.</li>
</ol>
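<p>The version calculation in step 3 boils down to collapsing all pending changesets into the single most significant bump. Here is an illustrative sketch of that idea; it is not Changesets&#x27; actual implementation, which also handles pre-releases, linked packages, and monorepo dependents.</p>

```javascript
// Illustrative sketch only; not Changesets' real logic.
const PRIORITY = { patch: 0, minor: 1, major: 2 };

// Collapse all pending changesets into the single most significant bump
function highestBump(changesets) {
  return changesets.reduce((acc, type) =>
    PRIORITY[type] > PRIORITY[acc] ? type : acc
  );
}

// Alice's patch plus Bob's minor & patch resolve to one minor release
console.log(highestBump(['patch', 'minor', 'patch'])); // → minor
```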
<h3>Installation &amp; configuration</h3>
<p>For simplicity&#x27;s sake, I will use <code>npm</code>. Replace <code>npm install</code> with <code>yarn add</code> &amp; <code>npx</code> with <code>yarn</code>, depending on your preferred package manager.</p>
<p>Let’s start by adding the CLI package that helps us with the version bumping, and another one to enrich our changelog with more metadata.</p>
<pre><code class="language-sh">npm install @changesets/cli @changesets/changelog-github
</code></pre>
<p>Initialize the configuration..</p>
<pre><code class="language-sh">npx changeset init
</code></pre>
<p>and you will be greeted by the following message:</p>
<pre><code class="language-txt">🦋  Thanks for choosing changesets to help manage your versioning and publishing
🦋
🦋  You should be set up to start using changesets now!
🦋
🦋  info We have added a `.changeset` folder, and a couple of files to help you out:
🦋  info - .changeset/README.md contains information about using changesets
🦋  info - .changeset/config.json is our default config
</code></pre>
<p>All good. If you navigate inside <code>.changeset/</code>, you&#x27;ll find the configuration file:</p>
<pre><code class="language-json">{
  &quot;$schema&quot;: &quot;https://unpkg.com/@changesets/config@1.6.1/schema.json&quot;,
  &quot;changelog&quot;: &quot;@changesets/cli/changelog&quot;,
  &quot;commit&quot;: false,
  &quot;linked&quot;: [],
  &quot;access&quot;: &quot;restricted&quot;,
  &quot;baseBranch&quot;: &quot;master&quot;,
  &quot;updateInternalDependencies&quot;: &quot;patch&quot;,
  &quot;ignore&quot;: []
}
</code></pre>
<p>I would suggest updating only this rule for now, which will include the PR &amp; author links next to each commit entry.</p>
<pre><code class="language-diff">- &quot;changelog&quot;: &quot;@changesets/cli/changelog&quot;,
+ &quot;changelog&quot;: [&quot;@changesets/changelog-github&quot;, {&quot;repo&quot;: &quot;your-name/your-repository&quot;}],
</code></pre>
<p>If you’re into tweaking it, <a href="https://github.com/atlassian/changesets/blob/main/docs/config-file-options.md">here you can find detailed explanations for all the rules</a>.
Finally, let’s update the npm scripts, and let Changeset take over the release.</p>
<pre><code class="language-json">&quot;scripts&quot;: {
  &quot;changeset&quot;: &quot;changeset&quot;,
  &quot;prerelease&quot;: &quot;npm run build &amp;&amp; npm run test&quot;, // optional
  &quot;release&quot;: &quot;changeset publish&quot;
},
</code></pre>
<h3>Setting up Workflows</h3>
<p>Before we proceed we need two tokens:</p>
<ol>
<li>A GitHub token with <code>repo</code>, <code>write:packages</code> permissions</li>
<li><a href="https://docs.npmjs.com/creating-and-viewing-access-tokens">An NPM token with write permission</a></li>
</ol>
<p>After you obtain them, make sure to add them under GitHub Secrets, so that GitHub Actions can access them.</p>
<hr/>
<p>Let’s create a <code>release.yml</code> under <code>.github/workflows/</code></p>
<blockquote>
<p>Code taken from <a href="https://github.com/changesets/action">changesets/action</a></p>
</blockquote>
<pre><code class="language-yaml">name: Release
on:
  push:
    branches:
      - master # or main
jobs:
  release:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
        with:
          # This makes action fetch all Git history so that Changesets can generate changelogs with the correct commits
          fetch-depth: 0
      - name: Use Node.js 14.x
        uses: actions/setup-node@v1
        with:
          node-version: 14.x
      - name: Install Dependencies
        run: yarn
        env:
          NPM_TOKEN: ${{ secrets.NPM_TOKEN }} # Ensure to have this set up under GitHub secrets
      - name: Create Release Pull Request or Publish to npm
        uses: changesets/action@master
        with:
          publish: yarn release
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} # Ensure to have this set up under GitHub secrets
          NPM_TOKEN: ${{ secrets.NPM_TOKEN }} # Ensure to have this set up under GitHub secrets
</code></pre>
<p>This action will <a href="https://github.com/emotion-js/emotion/pull/2394">create (or update) a PR</a> every time a new changeset is added to the main branch. It will calculate the final version for the next release and prepare the changelog. When said PR is merged, these changes will be pushed to the main branch, triggering a background job that publishes the package to NPM. A couple of moments later you&#x27;ll notice the new package version on the registry. That&#x27;s it!</p>
<p>You might also want to add the <a href="https://github.com/apps/changeset-bot">changeset-bot</a> that will notify you in PRs if there are any changesets. I find this very useful as I’m introducing new components in a design-system I’m working on, and keep forgetting to add changelog entries.</p>
<h3>Adding a changeset</h3>
<p>Enough of the configuration, let’s do some work, and call <code>npx changeset</code> to version the changes. Remember, not every commit or feature needs a changeset. Updating docs or improving the test suite doesn’t justify creating a new release for your end-users.</p>
<p>Anyway back to the terminal, let’s pick our change type..</p>
<pre><code class="language-txt">$ npx changeset
🦋  What kind of change is this for change? (current version is 1.0.0)
❯ patch
  minor
  major
</code></pre>
<p>Fill in the appropriate summary for the changelog..</p>
<pre><code class="language-txt">🦋  What kind of change is this for change? (current version is 1.0.0) · patch
🦋  Please enter a summary for this change (this will be in the changelogs). Submit empty line to open external editor
🦋  Summary › config: introduce changesets
</code></pre>
<p>And call it a day. Under <code>.changeset/</code> you will notice a new markdown file (its name is randomly generated), with the change-type and summary.</p>
<pre><code class="language-md">---
&#x27;change&#x27;: patch
---

config: introduce changesets
</code></pre>
<p>Push the file along with the rest of the changes, and let the GitHub actions do the heavy lifting for you. As for the markdown file, it will be deleted by our GitHub action once the entry it&#x27;s referencing is added to the changelog.</p>
<h3>Final thoughts</h3>
<p>Changesets is a godsend. Recently I had to set up a library, and getting versioning &amp; automatic publishing out of the way early was very high on my list. Thanks to Changesets, the bulk of the work is invisible to me, and I only have to decide when to version my changes.</p>
<p>Previously I was using <a href="https://github.com/semantic-release/semantic-release">Semantic release</a>, but I prefer keeping my commit format separate from the versioning process.</p>
<p>Many great libraries like <a href="https://github.com/mobxjs/mobx">MobX</a>, <a href="https://github.com/statelyai/xstate">XState</a> &amp; <a href="https://github.com/formium/formik">Formik</a> are using it, so feel free to include it in your projects.</p>
<blockquote>Resources:<ul>
<li><a href="https://github.com/atlassian/changesets/tree/main/docs">Docs</a></li>
<li><a href="https://github.com/changesets/action">Changeset action</a></li>
<li><a href="https://github.com/apps/changeset-bot">Changeset bot</a></li>
</ul></blockquote>]]></content:encoded>
            <author>dnlytras@gmail.com (Dimitrios Lytras)</author>
        </item>
        <item>
            <title><![CDATA[How I load my web fonts in Gatsby]]></title>
            <link>https://dnlytras.com/blog/how-i-load-fonts</link>
            <guid isPermaLink="false">https://dnlytras.com/blog/how-i-load-fonts</guid>
            <pubDate>Sun, 03 Jan 2021 00:00:00 GMT</pubDate>
            <description><![CDATA[Preloading self-hosted web fonts, for greater performance]]></description>
            <content:encoded><![CDATA[<blockquote><strong>This is a very outdated post.</strong> I don&#x27;t use Gatsby anymore. Feel free to check something else instead.</blockquote>
<hr/>
<p>On my website I use two typefaces: <a href="https://rsms.me/inter/">Inter</a> &amp; <a href="https://github.com/tonsky/FiraCode">Fira Code</a>. The first font-family covers everything on this website except the code blocks.</p>
<p>My main pet peeve is that before Inter loads, the visitor will briefly notice a fallback font. This is pretty much inevitable unless:</p>
<ol>
<li>You use a loader (suboptimal)</li>
<li>Your website transmits everything during the first connection roundtrip (~14K, sometimes impossible)</li>
<li>You want to <a href="https://css-tricks.com/really-dislike-fout-font-display-optional-might-jam/">stick to the fallback font</a></li>
</ol>
<p>But we&#x27;ll work with what we have.</p>
<h2>Other options</h2>
<h3>Using Google fonts</h3>
<ul>
<li>Putting an external dependency on Google is not my cup of tea, privacy-wise</li>
<li><a href="https://github.com/JulietaUla/Montserrat/issues/60#issue-271920884">You&#x27;re not in control</a> of the updates</li>
<li>Since Chrome v86, the HTTP cache is partitioned per site, so cross-site resources like fonts can’t be shared between websites. No performance boost from using Google Fonts anymore</li>
</ul>
<h3>Using fonts from NPM</h3>
<p>Like <a href="https://github.com/KyleAMathews/typefaces">typefaces</a> or <a href="https://github.com/fontsource/fontsource">fontsource</a>.</p>
<ul>
<li>The fonts will be processed by Webpack and get a hash appended to their filenames, breaking the cache. There are workarounds, but the alternatives are simpler</li>
<li>You can’t preload the fonts easily</li>
</ul>
<h2>How to do it</h2>
<h3>1. Self-host the fonts</h3>
<p>I like to manually fetch the desired fonts from the <a href="https://github.com/fontsource/fontsource/tree/master/packages">fontsource</a> repo. By selecting only the specific language subsets you need, you can trim a ~150kb font down to ~40kb, which is bonkers.</p>
<p>To avoid pulling multiple fonts, and since I don’t care for old browsers, I use the variable version of Inter &amp; Fira Code (<a href="https://caniuse.com/variable-fonts">support</a>), served as <a href="https://caniuse.com/woff2">woff2</a>.</p>
<pre><code class="language-txt">/static
  /fonts
    - fira-code-var-latin.woff2
    - inter-var-latin.woff2
</code></pre>
<h3>2. Write the font declarations</h3>
<p>And import the file from <code>gatsby-browser.js</code></p>
<pre><code class="language-css">@font-face {
  font-family: &#x27;Fira Code&#x27;;
  font-style: normal;
  font-display: swap;
  font-weight: 300 700;
  src: url(&#x27;/fonts/fira-code-var-latin.woff2&#x27;) format(&#x27;woff2&#x27;);
  unicode-range: U+0000-00FF, U+0131, U+0152-0153, U+02BB-02BC, U+02C6, U+02DA,
    U+02DC, U+2000-206F, U+2074, U+20AC, U+2122, U+2191, U+2193, U+2212, U+2215,
    U+FEFF, U+FFFD;
}
@font-face {
  font-family: &#x27;Inter&#x27;;
  font-style: normal;
  font-weight: 100 900;
  font-display: swap;
  src: url(&#x27;/fonts/inter-var-latin.woff2&#x27;) format(&#x27;woff2&#x27;);
  unicode-range: U+0000-00FF, U+0131, U+0152-0153, U+02BB-02BC, U+02C6, U+02DA,
    U+02DC, U+2000-206F, U+2074, U+20AC, U+2122, U+2191, U+2193, U+2212, U+2215,
    U+FEFF, U+FFFD;
}
</code></pre>
<h3>3. Preload the fonts</h3>
<p>If you don’t have <code>html.js</code>, check the process <a href="https://www.gatsbyjs.com/docs/custom-html/">here</a>. I prefer to only use <a href="https://github.com/nfl/react-helmet">react-helmet</a> for properties that differ from page to page, so anything else lives in <code>html.js</code> for me.</p>
<p>You&#x27;ll notice that I don’t include <code>Fira Code</code> here, as it’s not a critical resource.</p>
<pre><code class="language-html">&lt;link
  rel=&quot;preload&quot;
  href=&quot;/fonts/inter-var-latin.woff2&quot;
  as=&quot;font&quot;
  crossorigin=&quot;anonymous&quot;
  type=&quot;font/woff2&quot;
/&gt;
</code></pre>
<h3>4. Cache them hard</h3>
<p>I deploy my website on Netlify, and I use <a href="https://github.com/gatsbyjs/gatsby/tree/master/packages/gatsby-plugin-netlify#readme">gatsby-plugin-netlify</a> to handle any configuration.</p>
<p>Here’s how I cache my fonts, forever.</p>
<pre><code class="language-js">{
  resolve: &#x27;gatsby-plugin-netlify&#x27;,
  options: {
    headers: {
      &#x27;/fonts/*&#x27;: [
        &#x27;Cache-Control: public&#x27;,
        &#x27;Cache-Control: max-age=365000000&#x27;,
        &#x27;Cache-Control: immutable&#x27;,
      ],
    },
  },
},
</code></pre>
<p>Otherwise, without said plugin, a simple <code>_headers</code> file in the public folder will suffice.</p>
<pre><code class="language-txt">/fonts/*
  Cache-Control: public
  Cache-Control: max-age=365000000
  Cache-Control: immutable
</code></pre>
<h3>Final thoughts</h3>
<p>Before trying to install a font-related Gatsby plugin, why not consider this approach?</p>
<p>👋</p>]]></content:encoded>
            <author>dnlytras@gmail.com (Dimitrios Lytras)</author>
        </item>
        <item>
            <title><![CDATA[Working with images in Gatsby]]></title>
            <link>https://dnlytras.com/blog/gatsby-images</link>
            <guid isPermaLink="false">https://dnlytras.com/blog/gatsby-images</guid>
            <pubDate>Thu, 24 Dec 2020 00:00:00 GMT</pubDate>
            <description><![CDATA[Utilizing the latest, more flexible, and straightforward API]]></description>
            <content:encoded><![CDATA[<link rel="preload" as="image" href="https://res.cloudinary.com/ds9pd4ywd/image/upload/w_350/v1667745212/blog-images/posts/gatsby-images/fluid_vljic6.jpg"/><link rel="preload" as="image" href="https://res.cloudinary.com/ds9pd4ywd/image/upload/w_350/v1667745211/blog-images/posts/gatsby-images/constrained_ihksil.jpg"/><link rel="preload" as="image" href="https://res.cloudinary.com/ds9pd4ywd/image/upload/w_280/v1667745211/blog-images/posts/gatsby-images/blurred_m9h11e.gif"/><link rel="preload" as="image" href="https://res.cloudinary.com/ds9pd4ywd/image/upload/w_280/v1667745213/blog-images/posts/gatsby-images/dominant_ebiwc2.gif"/><link rel="preload" as="image" href="https://res.cloudinary.com/ds9pd4ywd/image/upload/w_280/v1667745213/blog-images/posts/gatsby-images/traced_hdsrh8.gif"/><link rel="preload" as="image" href="https://res.cloudinary.com/ds9pd4ywd/image/upload/w_280/v1667745210/blog-images/posts/gatsby-images/blog-files_g2eseq.jpg"/><blockquote><strong>This is a very outdated post.</strong> I don&#x27;t use Gatsby anymore. Feel free to check something else instead.</blockquote>
<hr/>
<p>Working with images is boring. If you want to include an image on a website, you have to ensure:</p>
<ul>
<li>That the images are properly resized for each screen</li>
<li>That we serve the right image based on the device pixel density</li>
<li>That we serve modern image formats when possible</li>
<li>That the images are compressed</li>
</ul>
<p>But also..</p>
<ul>
<li>That we don’t load all the page at once, consuming bandwidth, when the visitor might not even scroll to see most of the pages</li>
<li>That we don’t cause layout movements, as the images abruptly load</li>
</ul>
<p>Thankfully, Gatsby offers some nice utilities, bundling all the steps required into a single tool-chain, letting us focus on other things.</p>
<h2>To use GraphQL or not?</h2>
<p>Until recently, if you didn&#x27;t want to go the GraphQL route, you were out of luck. Thankfully, it&#x27;s now possible with the help of the <code>StaticImage</code> component. But let&#x27;s take a step back for a moment.</p>
<p>Gatsby is a fantastic framework if you want to consume data from various remote sources. Fetch the data (Instagram, WordPress, etc.), push it into the GraphQL layer, and populate your pages based on that data. To help with the images these sources include, the Gatsby team &amp; collaborators added a set of utilities to optimize them. But for everything outside the GraphQL layer, there wasn’t anything.</p>
<p>That resulted in unnecessary boilerplate. If you had a simple local image but wanted a cool blur effect, you had to go through GraphQL. That said, with the latest <code>gatsby-plugin-image</code> we have a solution, and we can safely go one way or the other depending on the use case.</p>
<ol>
<li>If the image comes from a remote source, you’re already using GraphQL</li>
<li>If the image is part of a collection, like the cover of a blog post, you will have an easier time with GraphQL</li>
<li>If the image is the 404 illustration, the image of a section, or something ephemeral, inlining is the best way to go.</li>
</ol>
<h3>Inlining images</h3>
<p>If your images don’t go through the Gatsby GraphQL layer, you can use <code>StaticImage</code></p>
<pre><code class="language-jsx">import {StaticImage} from &#x27;gatsby-plugin-image&#x27;;

// my actual 404
return (
  &lt;StaticImage
    alt=&quot;Man looking on the map&quot;
    className=&quot;border-b-4 border-gray-200&quot;
    layout=&quot;constrained&quot;
    width={400}
    src=&quot;../images/four-oh-four.png&quot;
  /&gt;
);
</code></pre>
<p><a href="https://github.com/gatsbyjs/gatsby/tree/master/packages/gatsby-plugin-image#api-1">The same configuration object</a> that we declare in the GraphQL schema, can be passed as props. Rejoice.</p>
<h2>GraphQL Configuration</h2>
<h3>Dependencies</h3>
<p>First of all, we need to ensure we have all the dependencies in place.</p>
<p>We need 4 plugins before we start:</p>
<ul>
<li><a href="https://www.gatsbyjs.com/plugins/gatsby-source-filesystem/">gatsby-source-filesystem</a>, to make the images known to the GraphQL data layer. You probably already have this installed.</li>
<li><a href="https://github.com/gatsbyjs/gatsby/tree/master/packages/gatsby-plugin-image#readme">gatsby-plugin-image</a>, which exposes the <code>StaticImage</code> &amp; <code>GatsbyImage</code> components</li>
<li><a href="https://www.gatsbyjs.com/plugins/gatsby-plugin-sharp/">gatsby-plugin-sharp</a>, to bridge the gap between <a href="https://github.com/lovell/sharp">Sharp</a> and the rest of the plugins</li>
<li><a href="https://www.gatsbyjs.com/plugins/gatsby-transformer-sharp/">gatsby-transformer-sharp</a>, to manipulate the images using GraphQL queries</li>
</ul>
<pre><code class="language-js">plugins: [
  {
    resolve: `gatsby-source-filesystem`,
    options: {
      path: `${__dirname}/src/images`,
      name: &#x27;images&#x27;,
    },
  },
  `gatsby-plugin-sharp`,
  `gatsby-transformer-sharp`,
  `gatsby-plugin-image`,
],
</code></pre>
<h3>Types of images</h3>
<p>Now, this is where it gets interesting - we can have three types of responsive images.</p>
<ol>
<li>Images with fixed width. When knowing exactly how big the images should be. (<code>FIXED</code>)</li>
<li>Images that stretch across their fluid parent container. Completely dependent on their parent, who can take many shapes and forms between screen sizes. (<code>FULL_WIDTH</code>)</li>
<li>Images that stretch across their container but limited to a maximum width (<code>CONSTRAINED</code>)</li>
</ol>
<p>Ok, this might be confusing. The difference between <code>FULL_WIDTH</code> &amp; <code>CONSTRAINED</code> can be seen in the following table.
The <code>FULL_WIDTH</code> image will expand to fill its container, even if it looks blurred.</p>
<table><thead><tr><th></th><th></th></tr></thead><tbody><tr><td><img src="https://res.cloudinary.com/ds9pd4ywd/image/upload/w_350/v1667745212/blog-images/posts/gatsby-images/fluid_vljic6.jpg" alt="FULL_WIDTH"/></td><td><img src="https://res.cloudinary.com/ds9pd4ywd/image/upload/w_350/v1667745211/blog-images/posts/gatsby-images/constrained_ihksil.jpg" alt="constrained"/></td></tr></tbody></table>
<hr/>
<p>So assuming we want a <code>CONSTRAINED</code> image and we don’t want image copies bigger than 200px, here’s our query.</p>
<pre><code class="language-js">query {
  image: file(relativePath: { eq: &quot;image.jpg&quot; }) {
    childImageSharp {
      gatsbyImageData(
        quality: 90
        width: 200
        layout: CONSTRAINED
      )
    }
  }
}
</code></pre>
<p>This setting will make sure to include copies for both <code>jpg</code> and <code>webp</code>, even if we don’t specifically request the latter.</p>
<h3>Placeholders</h3>
<p>Everything is in order, but we probably want a smooth fallback. Here are our options:</p>
<ul>
<li><code>BLURRED</code>: (default) a blurred, low-resolution image, encoded as a base64 data URI</li>
<li><code>TRACED_SVG</code>: a low-resolution traced SVG of the image</li>
<li><code>DOMINANT_COLOR</code>: a solid color, calculated from the dominant color of the image.</li>
<li><code>NONE</code>: no placeholder. Looks better with the <code>background</code> prop set.</li>
</ul>
<p>Here are the first three options side to side. I tend to prefer the first two.</p>
<table><thead><tr><th>Blurred</th><th>Dominant</th><th>Traced</th></tr></thead><tbody><tr><td><img src="https://res.cloudinary.com/ds9pd4ywd/image/upload/w_280/v1667745211/blog-images/posts/gatsby-images/blurred_m9h11e.gif" alt="blurred"/></td><td><img src="https://res.cloudinary.com/ds9pd4ywd/image/upload/w_280/v1667745213/blog-images/posts/gatsby-images/dominant_ebiwc2.gif" alt="dominant"/></td><td><img src="https://res.cloudinary.com/ds9pd4ywd/image/upload/w_280/v1667745213/blog-images/posts/gatsby-images/traced_hdsrh8.gif" alt="traced"/></td></tr></tbody></table>
<h3>Transforms</h3>
<p>Gatsby allows us to do some transforms too:</p>
<ul>
<li>grayscale</li>
<li>duotone</li>
<li>rotate</li>
<li>trim</li>
<li>cropFocus</li>
<li>fit</li>
</ul>
<p>Frankly, Gatsby is a bit notorious for its build times, so I would advise pre-processing your images ahead of time to avoid transforming the same images again and again on every build. If that&#x27;s not possible, this option is for you.</p>
<h3>Consuming the images</h3>
<p>Contrary to the <code>StaticImage</code> example, <code>GatsbyImage</code> accepts an <code>image</code> prop.</p>
<pre><code class="language-jsx">import {GatsbyImage} from &#x27;gatsby-plugin-image&#x27;;

return (
  &lt;GatsbyImage
    alt={album.title}
    image={album.cover.childImageSharp.gatsbyImageData}
    className=&quot;shadow-sm&quot;
  /&gt;
);
</code></pre>
<p>Now writing <code>album.cover.childImageSharp.gatsbyImageData</code> is a bit tedious, so we can import <code>getImage</code> from the very same package, and refactor it as follows:</p>
<pre><code class="language-jsx">import {GatsbyImage, getImage} from &#x27;gatsby-plugin-image&#x27;;

return (
  &lt;GatsbyImage
    alt={album.title}
    image={getImage(album.cover)} // much better
    className=&quot;shadow-sm&quot;
  /&gt;
);
</code></pre>
<h3>Referencing images</h3>
<p>You probably want to fetch the images dynamically, as part of another entity. The best way for that is to link the images with the rest of the metadata. Here’s an example from my blog.</p>
<pre><code class="language-json">  {
    &quot;artist&quot;: &quot;Joy Division&quot;,
    &quot;title&quot;: &quot;Unknown Pleasures&quot;,
    &quot;releasedDate&quot;: 1979,
    &quot;rating&quot;: 5,
    &quot;cover&quot;: &quot;./images/unknown-pleasures.jpg&quot;,
    &quot;spotify&quot;: &quot;https://open.spotify.com/album/0cbpcdI4UySacPh5RCpDfo&quot;
  },
  {
    &quot;artist&quot;: &quot;King Crimson&quot;,
    &quot;title&quot;: &quot;In the Court of the Crimson King&quot;,
    &quot;releasedDate&quot;: 1969,
    &quot;rating&quot;: 5,
    &quot;cover&quot;: &quot;./images/in-the-court.jpg&quot;,
    &quot;spotify&quot;: &quot;https://open.spotify.com/album/5wec5BciMpDMzlEFpYeHse&quot;
  },
  {
    &quot;artist&quot;: &quot;Radiohead&quot;,
    &quot;title&quot;: &quot;OK Computer&quot;,
    &quot;releasedDate&quot;: 1997,
    &quot;rating&quot;: 5,
    &quot;cover&quot;: &quot;./images/ok-computer.jpg&quot;,
    &quot;spotify&quot;: &quot;https://open.spotify.com/album/7dxKtc08dYeRVHt3p9CZJn&quot;
  },
</code></pre>
<p>And here&#x27;s the query fetching them along with the rest of the metadata</p>
<pre><code class="language-js">export const query = graphql`
  {
    albums: allAlbumsJson(
      sort: {
        fields: [rating, badge, title, artist, releasedDate]
        order: DESC
      }
    ) {
      edges {
        node {
          id
          releasedDate
          title
          artist
          rating
          badge
          spotify
          cover {
            childImageSharp {
              gatsbyImageData(
                height: 200
                width: 200
                quality: 100
                layout: CONSTRAINED
                placeholder: DOMINANT_COLOR
              )
            }
          }
        }
      }
    }
  }
`;
</code></pre>
<h3>Images in markdown files</h3>
<p>Unfortunately, <code>gatsby-plugin-image</code> doesn’t help with markdown files. We can optimize any images we query along with our data, but the images referenced inside the markdown file itself won’t be touched.</p>
<p>To do this, we have to include a separate plugin, <a href="https://www.gatsbyjs.com/packages/gatsby-remark-images/">gatsby-remark-images</a>. To keep things tidy, I like to keep my blog images near the markdown file, and copy them over with <a href="https://github.com/gatsbyjs/gatsby/tree/master/packages/gatsby-remark-copy-linked-files">gatsby-remark-copy-linked-files</a>.</p>
<p>Here’s how it looks</p>
<p><img src="https://res.cloudinary.com/ds9pd4ywd/image/upload/w_280/v1667745210/blog-images/posts/gatsby-images/blog-files_g2eseq.jpg" alt="blog-files"/></p>
<p>And here how it’s written</p>
<pre><code class="language-md">Here’s how it looks.

![blog-files](./blog-files.jpg)
</code></pre>
<hr/>
<p>Now having installed the plugins, we add some basic options and we&#x27;re ready to go.</p>
<pre><code class="language-js">{
  resolve: `gatsby-transformer-remark`,
  options: {
    plugins: [
      // .. rest of the plugins
      &#x27;gatsby-remark-copy-linked-files&#x27;,
      {
        resolve: `gatsby-remark-images`,
        options: {
          maxWidth: 900,
          quality: 90,
          withWebp: true,
        },
      },
    ],
  },
},
</code></pre>
<p>Not as polished as <code>gatsby-plugin-image</code>, but it does the trick.</p>
<h2>Final thoughts</h2>
<p>At the time of writing <code>gatsby-plugin-image</code> is still in beta. That said, I’m using it because life is too short to not live on the edge.</p>
<p>If you want to follow the official documentation, you can find it <a href="https://www.gatsbyjs.com/docs/reference/built-in-components/gatsby-plugin-image/">here</a></p>
<p>👋</p>]]></content:encoded>
            <author>dnlytras@gmail.com (Dimitrios Lytras)</author>
        </item>
        <item>
            <title><![CDATA[What is a Front-end Developer anyway?]]></title>
            <link>https://dnlytras.com/blog/what-is-a-front-end-developer-anyway</link>
            <guid isPermaLink="false">https://dnlytras.com/blog/what-is-a-front-end-developer-anyway</guid>
            <pubDate>Sat, 26 Sep 2020 00:00:00 GMT</pubDate>
            <description><![CDATA[JavaScript has shaken things up and we refuse to acknowledge it]]></description>
            <content:encoded><![CDATA[<p>The common answer is: <em>“Someone proficient in HTML-CSS-JS, who can convert PSD files to pixel perfect websites”</em>. Unfortunately, while this is a great description, we underestimate how big JavaScript has gotten.</p>
<p>A front-end developer utilizing a stack of
<code>React</code>, <code>TailwindCSS</code>, <code>Next.js</code>, and <code>Prisma</code> can ship a pretty good MVP quite fast. The JavaScript tools we have available are just too good. And they are getting better! It&#x27;s a fantastic time to be a developer.</p>
<hr/>
<p>During my career, I was expected to build apps with <code>React</code>, <code>Redux</code>, <code>Express</code>, <code>Webpack</code>, and then go build websites with responsive typography and smooth animations. Release <code>React Native</code> apps, and then spin off landing pages with accessible and semantic markup.</p>
<p>I struggled a lot, feeling that I couldn’t keep up. The more I focused on the “back-end” of the front-end, the more I felt out of touch with CSS. The more I wanted to master performance optimization, the more I distanced myself from creating kick-ass animations.</p>
<p>I felt that I was the one at fault, until I realized that it’s impossible to cover everything. We are shoe-horning so many aspects of web development into a single job title that it’s impossible to ever feel that you&#x27;ve mastered front-end.</p>
<h3>The confusion</h3>
<p><strong>Client-side !== Front-end</strong></p>
<p>We mistakenly label anything that runs in the browser (client-side) as &quot;Front-end&quot;. Some years ago, that would be true, as the code in the browser would only:</p>
<ol>
<li>Account for how the page looks &amp; feels</li>
<li>Handle the user interactions</li>
</ol>
<p>But technology evolves and we rightly demand more from our applications. Eventually, we delegated part of the business logic that would traditionally live in the &quot;Back-end&quot; (server-side) to browser code.</p>
<p>We started introducing state-management libraries (<code>Redux</code>, <code>MobX</code>, <code>Zustand</code>), solutions to optimize the data flow (<code>Apollo Client</code>, <code>React-Query</code>), and got creative with our CSS (<code>Styled components</code>, <code>Fela</code>). Add <code>TypeScript</code>, <code>RxJS</code>, <code>Webpack</code>, <code>Docker</code> to the mix too. The front-end tool-chain got quite big.</p>
<p>This meant that the modern front-end developer should also be skilled in data structures, algorithms, and scalable architecture. Oh and functional programming.</p>
<p>This created huge confusion for people who didn’t have a solid programming background, and only cared about HTML, CSS, design &amp; animations. They got side-lined by what are essentially back-end developers whose primary programming language is JavaScript.</p>
<h3>The problem</h3>
<p>The modern Front-end Developer title describes a jack of all trades, or, more accurately, a JavaScript Engineer. I feel blessed working across the stack, being part of an ecosystem that is always looking to disrupt. But this has some shortcomings:</p>
<ul>
<li>Junior developers have no idea where to start</li>
<li>Accessibility becomes an afterthought</li>
<li>Excellent professionals feel impostor syndrome due to the overwhelming context</li>
<li>Subpar front-end experiences. Consider serving a website holding critical info during emergencies with a 2MB JavaScript bundle</li>
</ul>
<p>With JavaScript taking over, we, unfortunately, have succumbed to mediocrity. We miss proper craftsmanship. For the modern front-end developer writing CSS &amp; HTML is a necessary evil. <em>Something easy</em>.</p>
<p>This became more apparent to me while reviewing candidate assignments. People know React and can reason about the state management options, but fail hard in CSS. Algorithmic skills matter more than proper templating.</p>
<p>And eventually:</p>
<ol>
<li>Those who don’t like JavaScript, are forced to adapt and follow along producing poor code.</li>
<li>While those who love JavaScript, assume what they know in the HTML/CSS-sphere is good enough, producing yet again poor code.</li>
</ol>
<blockquote><p>Two “front-end web developers” can be standing right next to each other and have little, if any, skill sets in common. That’s downright bizarre to me for a job title so specific and ubiquitous. I’m sure that’s already the case with a job title like designer, but front-end web developer is a niche within a niche already.</p><p><a href="https://css-tricks.com/the-great-divide/">The Great Divide</a></p></blockquote>
<h3>Should we keep calling ourselves Front-end Developers?</h3>
<p>I believe not. Having <u>UX Developers</u> &amp; <u>JavaScript Engineers</u> rings more true to my ears. Companies that operate at a large scale already have some sort of distinction like that. Medium-sized start-ups, not so much; they won’t hire a specialist.</p>
<p>But again this isn’t about what title to slap on your LinkedIn profile. We need to understand that just because JavaScript can be written everywhere now, that doesn’t make someone a front-end developer.</p>
<p>That &quot;good enough&quot; is almost always bad. Before no-code takes all of our jobs, let’s stop the gatekeeping, and accept that the mythical Front-end developer ninja died a long time ago.</p>]]></content:encoded>
            <author>dnlytras@gmail.com (Dimitrios Lytras)</author>
        </item>
        <item>
            <title><![CDATA[Separating server cache & application state with React Query]]></title>
            <link>https://dnlytras.com/blog/data-fetching-with-react-query</link>
            <guid isPermaLink="false">https://dnlytras.com/blog/data-fetching-with-react-query</guid>
            <pubDate>Sat, 18 Apr 2020 00:00:00 GMT</pubDate>
            <description><![CDATA[UI state, ephemeral data & React Query]]></description>
            <content:encoded><![CDATA[<blockquote><strong>This is a very outdated post.</strong> Feel free to check something else instead.</blockquote>
<hr/>
<h3>The Problem</h3>
<p>There was a period where we included everything in our Redux stores. Maybe not all of you, but the teams I worked with used to do it. That&#x27;s what we knew, and what Medium authors were preaching as good practice.</p>
<p>We had plenty of presentational components that displayed information we got from our server. The info we wanted to present would be duplicated across the application, so it made sense to centralize the data fetching implementation and update them all accordingly.</p>
<p>There is nothing wrong with the approach as I’ve described it. But ultimately, it gets overwhelming in ever-growing applications. What was supposed to be an excellent solution for state management was now also responsible for handling loading states and making remote calls.</p>
<p>We introduced <code>redux-thunk</code>, some of you might have even tried <code>redux-saga</code> and its approach with generators. So much more complexity.</p>
<p>And it’s not just that we had trouble keeping up with the remote data flow. The information that actually represents our UI state would be drowned out, and refactoring would be painful.</p>
<h3>What we needed</h3>
<p>There is no need to store what the server responds. If the <code>time-to-live</code> of the response is a couple of seconds, we don’t want that anywhere near our stores. The moment we get the response, we transform it accordingly and we feed it to the components.</p>
<p>And this is where <a href="https://github.com/tannerlinsley/react-query">React Query</a> comes into play.</p>
<p>In short:</p>
<ol>
<li>If the component wants remote data, React Query fetches them</li>
<li>It exposes the status progress so that the component can give proper feedback</li>
<li>It refetches the data or serves them from the cache when needed again</li>
</ol>
<p>Let’s see it in action.</p>
<pre><code class="language-jsx">function Matches() {
  const {status, data, error} = useQuery(&#x27;matches&#x27;, getMatches);

  if (status === &#x27;loading&#x27;) {
    return &lt;Skeleton /&gt;;
  }

  if (status === &#x27;error&#x27;) {
    // Depends on your implementation
    return &lt;ErrorState error={error} /&gt;;
  }

  // something like this
  return (
    &lt;List&gt;
      {data.map((match) =&gt; (
        &lt;MatchesRow match={match} key={match.id} /&gt;
      ))}
    &lt;/List&gt;
  );
}
</code></pre>
<p>Here we use the <code>useQuery</code> hook. We tell React Query that we want to call the <code>getMatches</code> function, and associate the response with the key <code>matches</code>.</p>
<p>We don’t store the loading states or the response. Somehow, React Query handles all that and feeds the component just the right amount of data it needs.
We don’t care if the data are new or cached, or what happened. Only that we should prepare some view for the state of the remote call.</p>
<p>Any other component asking for the same query/function combination will get the cached response. Isn’t this what we want? And if it isn’t, that’s ok, we can configure it.</p>
<p>What matters is that we have a solution that caches the ephemeral data from the server, and lets us know of the progress. We don’t include <code>isXLoading</code> flags in our stores, and we can be happy again.</p>
<hr/>
<p>What about passing params?</p>
<p>They are part of the key of course! Every little variation will get a cached entry of its own.</p>
<pre><code class="language-jsx">function Match({matchId}) {
  const {status, data, error} = useQuery([&#x27;matches&#x27;, matchId], getMatch);

  if (status === &#x27;loading&#x27;) {
    return &lt;Skeleton /&gt;;
  }

  if (status === &#x27;error&#x27;) {
    // Depends on your implementation
    return &lt;ErrorState error={error} /&gt;;
  }

  // something like this, with MatchDetails being your presentational component
  return &lt;MatchDetails match={data} /&gt;;
}
</code></pre>
<p>And here’s how the fetcher accepts them.</p>
<pre><code class="language-ts">export const getMatch = ({
  queryKey,
}: QueryFunctionContext&lt;[string, string]&gt;) =&gt; {
  const [_key, matchId] = queryKey;
  return axios
    .get(MATCH_ENDPOINT, {params: {id: matchId}})
    .then((response) =&gt; response.data.match);
};
</code></pre>
<blockquote>
<p>I don’t want to dive deep into the syntax of the library. It has excellent documentation &amp; a vibrant community. <a href="https://react-query.tanstack.com/overview">Take a look</a>. There are plenty of goodies I’m not covering here.</p>
</blockquote>
<h3>Extract to a custom hook</h3>
<p>In this example we want the query to re-run when we change our state variables <code>matchStatus</code>, <code>enabledLeagues</code>, <code>orderBy</code>. There is no need to duplicate the code in every component, so abstracting it into a custom hook is an excellent option.</p>
<p>Here’s an example:</p>
<pre><code class="language-jsx">import React from &#x27;react&#x27;;
import {useQuery} from &#x27;react-query&#x27;;
import {pick} from &#x27;lodash-es&#x27;;

import {useMatchContext} from &#x27;../state&#x27;;
import {getMatches} from &#x27;../api/queries&#x27;;

export const useMatchesQuery = ({queryProps = {}} = {}) =&gt; {
  const {state} = useMatchContext();

  const {matchStatus, enabledLeagues, orderBy} = state;
  const queryVariables = {matchStatus, enabledLeagues, orderBy};

  // We&#x27;re using the alternative Object API here.
  // For more expressive calls, I find that it helps with readability
  const results = useQuery({
    queryFn: getMatches,
    queryKey: [&#x27;matches&#x27;, queryVariables],
    staleTime: Infinity,
    onError: (error) =&gt; {
      // some reporting if needed
    },
    // override default configuration if needed
    ...queryProps,
  });

  const {status, error} = results;
  const data = results.data || [];

  return {status, data, error};
};
</code></pre>
<p>So we can re-use the same query, without having to account for the key or the fetcher function.</p>
<pre><code class="language-jsx">function Matches() {
  const {status, data, error} = useMatchesQuery();

  /// ...
}
</code></pre>
<h3>Canceling requests</h3>
<p>Although we don’t need to cancel requests to avoid outdated content (different params result in different queries), it is good practice to do so, so that our server won’t have to take the extra load.</p>
<p>Here’s an example. By clicking <em>&quot;Next page&quot;</em> 5 times on a paginated table, we’ll run 5 queries. Thankfully we won’t have conflicts since all of them have distinct keys (due to the page number). Our loyal server will respond 5 times though.</p>
<p>So here’s the question. Do we value more having these 4 previous pages cached for the future, or minimizing the load for our server?</p>
<p>It depends. I prefer to cancel the requests as a general rule.</p>
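<p>To make this concrete, here’s a minimal sketch of a cancelable fetcher built on the standard <code>AbortController</code>. The <code>getMatchesPage</code> name and the <code>/api/matches</code> endpoint are made up for illustration, and the exact way React Query hands a signal to the query function depends on the version you’re using:</p>
<pre><code class="language-js">// Sketch only: &#x27;/api/matches&#x27; is a made-up endpoint.
const getMatchesPage = ({queryKey, signal}) =&gt; {
  const [, page] = queryKey;
  return fetch(`/api/matches?page=${page}`, {signal}).then((res) =&gt; res.json());
};

// Aborting the controller rejects the in-flight request, so the server
// can stop serving a page that nobody is looking at anymore.
const controller = new AbortController();
controller.abort();
console.log(controller.signal.aborted); // true
</code></pre>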
<h3>How caching works</h3>
<p>Let’s take a closer look at the caching decision-making.</p>
<ul>
<li>A new instance of <code>useQuery(&#x27;matches&#x27;, getMatches)</code> mounts.<!-- -->
<ul>
<li>Since no other queries have been made with this query + variable combination, this query will show a hard loading state and make a network request to fetch the data.</li>
<li>It will then cache the data using &#x27;matches&#x27; and getMatches as the unique identifiers for that cache.</li>
<li>A stale invalidation is scheduled using the staleTime option as a delay (defaults to 0, or immediately).</li>
</ul>
</li>
<li>A second instance of <code>useQuery(&#x27;matches&#x27;, getMatches)</code> mounts elsewhere.<!-- -->
<ul>
<li>Because this exact data exist in the cache from the first instance of this query, that data is immediately returned from the cache.</li>
</ul>
</li>
<li>Both instances of the <code>useQuery(&#x27;matches&#x27;, getMatches)</code> query are unmounted and no longer in use.<!-- -->
<ul>
<li>Since there are no more active instances to this query, a cache timeout is set using cacheTime to delete and garbage collect the query (defaults to 5 minutes).</li>
</ul>
</li>
<li>No more instances of <code>useQuery(&#x27;matches&#x27;, getMatches)</code> appear within 5 minutes.<!-- -->
<ul>
<li>This query and its data are deleted and garbage collected.</li>
</ul>
</li>
</ul>
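<p>The decision-making above can be modeled with a toy cache keyed by the serialized query key. This is <em>not</em> React Query’s actual implementation, just a sketch of the staleness bookkeeping it automates for us (the <code>now</code> parameter only exists to make the example deterministic):</p>
<pre><code class="language-js">const cache = new Map();

function readQuery(key, fetcher, {staleTime = 0, now = Date.now()} = {}) {
  const id = JSON.stringify(key);
  const entry = cache.get(id);

  // Fresh entry: serve it from the cache, no network request.
  if (entry &amp;&amp; now - entry.updatedAt &lt;= staleTime) {
    return {data: entry.data, fromCache: true};
  }

  // Missing or stale entry: &quot;fetch&quot; and update the cache.
  const data = fetcher();
  cache.set(id, {data, updatedAt: now});
  return {data, fromCache: false};
}
</code></pre>
<p>With the default <code>staleTime</code> of 0 a cached entry goes stale almost immediately; with <code>Infinity</code> it never does.</p>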
<p>So <code>stale</code> queries are automatically refetched:</p>
<ul>
<li>Whenever their query keys change (this includes variables used in query key tuples),</li>
<li>When they are freshly mounted from not having any instances on the page,</li>
<li>Or when they are refetched via the query cache manually.</li>
</ul>
<p>I like to set the <code>staleTime</code> to a certain amount when I’m confident that the data won’t change soon.</p>
<h3>Afterthoughts</h3>
<p>After using React Query in an application that primarily makes GET requests, I can say that it has helped immensely.
Orchestrating remote data fetching creates a lot of boilerplate, and it makes sense to outsource it. This way we can focus more on writing features.</p>
<p>But what I never considered at first was how much it helps with refactoring. Migrating over to <code>GraphQL</code> endpoints is such a painless task. React Query doesn’t care what protocol or client you’re using.</p>
<p>Finally, you get to see your UI stores for what they are. I have heard testimonies of people dropping their dedicated state management library in favor of simple Context. And it makes sense. Not in every case, but it’s a viable path.</p>
<p>I like React Query because it promotes less complexity and more confidence in the code.</p>
<p>Give it a spin 🙇‍♂️</p>
<blockquote>Resources:<ul>
<li><a href="https://react-query.tanstack.com/overview">React Query Documentation</a> (is crazy good)</li>
<li><a href="https://medium.com/better-programming/why-you-should-be-separating-your-server-cache-from-your-ui-state-1585a9ae8336">Why You Should Be Storing Remote Data in a Cache (and Not in State)</a></li>
</ul></blockquote>]]></content:encoded>
            <author>dnlytras@gmail.com (Dimitrios Lytras)</author>
        </item>
        <item>
            <title><![CDATA[When to use optimistic updates in your application]]></title>
            <link>https://dnlytras.com/blog/optimistic-updates</link>
            <guid isPermaLink="false">https://dnlytras.com/blog/optimistic-updates</guid>
            <pubDate>Sun, 15 Mar 2020 00:00:00 GMT</pubDate>
            <description><![CDATA[Reduce the number of spinners in your application by utilizing optimistic updates]]></description>
            <content:encoded><![CDATA[<link rel="preload" as="image" href="https://res.cloudinary.com/ds9pd4ywd/image/upload/v1667745446/blog-images/posts/optimistic-updates/github-star_avsjp4.gif"/><p>Updating the UI optimistically is a pattern of persisting a new UI state, before getting the go-ahead approval of the server.</p>
<p>This pattern makes more sense for non-destructive parts of our interfaces. Things like liking a post, marking a tweet as favorite, or as seen below, starring a repository.</p>
<p><img src="https://res.cloudinary.com/ds9pd4ywd/image/upload/v1667745446/blog-images/posts/optimistic-updates/github-star_avsjp4.gif" alt="star-gif"/></p>
<p>In the above scenario, the GitHub UI will mark the repository as &#x27;starred&#x27;, even if the backend hasn’t approved of the operation.</p>
<p>But it’s fine really! Operations like that are meant to be fast. To properly capture the sequence, I had to drop down to GPRS speed.</p>
<p>The other options would be:</p>
<ul>
<li>Display a loader and then persist the new state</li>
<li>Don’t display a loader, but cling to the previous state and update when the server responds.</li>
</ul>
<p>Both of these are suboptimal for such cases.</p>
<p>The first one can be very UX-unfriendly. Seeing small spinners for every single micro-interaction can be very tiring for the user.</p>
<p>The second one makes the user second-guess whether they clicked or not. But most importantly, it gives the impression that the application is sluggish, buggy, or both.</p>
<h3>Sometimes the user expects a delay.</h3>
<p>Think of toggling an article from private to public. By slowing things down, we&#x27;re letting the user know that the operation has been processed and the article is now live.</p>
<p>Even if the interaction is a simple switch, it’s an important action for which the user <em>expects</em> a delay.</p>
<p>They are not moving to other actions, but instead, they are waiting for a confirmation, and any immediate feedback will put them in disbelief.</p>
<h3>Choose wisely</h3>
<p><strong>Optimistic updates are meant for complementary actions.</strong></p>
<p>Here are some rules of thumb:</p>
<ul>
<li>The actions should be binary-like. (&quot;Liked&quot;, &quot;Starred&quot; or &quot;In saved searches&quot;)</li>
<li>The actions shouldn’t be tied to other parts of the interface.</li>
<li>The API response times should be extremely fast.</li>
<li>The API success ratio should be near 100%.</li>
</ul>
<p>We&#x27;re optimistic, but <del>not stupid</del> for a reason.</p>
<h3>Feedback</h3>
<p>Eventually, something will go wrong.</p>
<p>Reverting the original state should suffice. Remember we&#x27;re doing this because we know that the API will respond in less than a second, and it will almost always succeed. The user will notice the change.</p>
<p>Of course, we can also silently-retry or provide feedback. It depends on the context.</p>
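<p>Stripped of any framework, the whole pattern fits in a few lines. The <code>setState</code> and <code>apiCall</code> arguments below are illustrative stand-ins rather than any specific library’s API:</p>
<pre><code class="language-js">// Minimal sketch of an optimistic toggle with rollback.
async function toggleStar(state, setState, apiCall) {
  const previous = state.starred;

  // 1. Persist the new state immediately (the optimistic part).
  setState({starred: !previous});

  try {
    // 2. Fire the request in the background.
    await apiCall(!previous);
  } catch (error) {
    // 3. The server said no: revert to the original state.
    setState({starred: previous});
  }
}
</code></pre>
<p>Notice there is no loading state at all; the only extra work compared to a pessimistic update is remembering <code>previous</code> for the rollback.</p>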
<p>Be optimistic 🤞</p>]]></content:encoded>
            <author>dnlytras@gmail.com (Dimitrios Lytras)</author>
        </item>
        <item>
            <title><![CDATA[Following the hype in web development]]></title>
            <link>https://dnlytras.com/blog/following-the-hype</link>
            <guid isPermaLink="false">https://dnlytras.com/blog/following-the-hype</guid>
            <pubDate>Sun, 01 Sep 2019 00:00:00 GMT</pubDate>
            <description><![CDATA[The new things, the right things, and developer entertainment]]></description>
            <content:encoded><![CDATA[<p>Web development is a field that moves fast. The tools you worked with three years ago, might not be the hottest anymore. Which can lead to FOMO and the need to keep up with the latest trends.</p>
<p>So the question is, when should one adopt a new piece of technology?</p>
<p>Here&#x27;s a list of things to consider:</p>
<ul>
<li>Is it creating a better user experience?</li>
<li>Is it trimming down the bill?</li>
<li>Is it simplifying the code, and making it easier to maintain?</li>
</ul>
<p>If the answer to all is no, then you should probably not adopt it.</p>
<p>I had my fair share of technology FOMO. I&#x27;ve introduced libraries like <code>Redux</code>, among others, to places where they didn&#x27;t belong. I regret it.</p>
<p>It&#x27;s very easy as a beginner to get excited about shiny stuff and fancy landing pages. As you start to learn more about the field, you realize that there are tradeoffs you didn&#x27;t consider. And in a lot of cases, the tradeoffs are not worth it.</p>
<p>To sum up, it&#x27;s good to be a late adopter. Let others face the <code>Next.js</code> warts and bugs. Give that new library a few months to mature. See if the community is still excited about it. Take your time, don&#x27;t rush into anything.</p>
<p>Ship, ship, ship. That&#x27;s the most important thing.</p>]]></content:encoded>
            <author>dnlytras@gmail.com (Dimitrios Lytras)</author>
        </item>
        <item>
            <title><![CDATA[Avoiding props drilling with React Context]]></title>
            <link>https://dnlytras.com/blog/avoiding-props-drilling</link>
            <guid isPermaLink="false">https://dnlytras.com/blog/avoiding-props-drilling</guid>
            <pubDate>Sat, 04 May 2019 00:00:00 GMT</pubDate>
            <description><![CDATA[Sensible state management with Context & Hooks]]></description>
            <content:encoded><![CDATA[<blockquote><strong>This is a very outdated post.</strong> Feel free to check something else instead.</blockquote>
<hr/>
<p>&#x27;Props drilling&#x27; is the process of passing down data to your components level after level.</p>
<p>You start simple with two child components. Then you have to add more features and inevitably the two original components have children of their own. You start moving your callbacks further and further down the component hierarchy, but ultimately the code works.</p>
<p>It doesn’t feel nice though, since we have components that don’t care about most of their props; they just act as middlemen. To add insult to injury:</p>
<ul>
<li>Refactoring is painful</li>
<li>Needs more work to avoid unnecessary re-renderings</li>
<li>There is a lot of code smell</li>
</ul>
<p>Here’s an example. We have an <code>ItemsList</code> component that semantically groups everything related to the list of our items.</p>
<pre><code class="language-jsx">// ...
const ItemsList = memo(
  ({items, selectedItems, handleSelect, handleDelete, handleSort}) =&gt; (
    &lt;div&gt;
      &lt;ItemsControl
        total={items.length}
        selectedTotal={selectedItems.length}
        handleDelete={handleDelete}
        handleSort={handleSort}
      /&gt;
      &lt;ItemsTable
        items={items}
        handleDelete={handleDelete}
        handleSelect={handleSelect}
      /&gt;
    &lt;/div&gt;
  )
);
</code></pre>
<p>Now the <code>ItemsTable</code> can be broken down into more components. And the <code>ItemRow</code> could be broken down into even more if the table cell contents are editable.</p>
<pre><code class="language-js">// ...
const ItemsTable = memo(({items, handleDelete, handleSelect}) =&gt; {
  return (
    &lt;table&gt;
      &lt;thead&gt;
        &lt;tr&gt;
          &lt;th&gt;Name&lt;/th&gt;
          &lt;th&gt;Quantity&lt;/th&gt;
          &lt;th&gt;Actions&lt;/th&gt;
        &lt;/tr&gt;
      &lt;/thead&gt;
      &lt;tbody&gt;
        {items.map((item) =&gt; (
          &lt;ItemRow
            key={item.id}
            {...item}
            handleDelete={handleDelete}
            handleSelect={handleSelect}
          /&gt;
        ))}
      &lt;/tbody&gt;
    &lt;/table&gt;
  );
});
</code></pre>
<p>In medium to large applications, with an overload of information people resort to state management solutions like <a href="https://redux.js.org/">Redux</a> or <a href="https://mobx.js.org/">MobX</a>.</p>
<p>While we can use one of either, in our use case we don’t need a state-management library, as we are only about to use a small subset of their features.</p>
<p>Thankfully the React team in <a href="https://reactjs.org/blog/2018/03/29/react-v-16-3.html">16.3.0</a> updated their <a href="https://reactjs.org/docs/context.html">context API</a> that these libraries rely on.</p>
<p>What is a game-changer for me now is the combination with the newly released <a href="https://reactjs.org/docs/hooks-overview.html">React Hooks</a>. It offers a greatly streamlined development experience. Let’s dive in!</p>
<blockquote>Note: You can also use composition. Here’s a tweet demonstrating this approach <a href="https://twitter.com/mjackson/status/1195495535483817984">https://twitter.com/mjackson/status/1195495535483817984</a> - To be frank, I believe it can easily get out of hand, but it works wonders for a few levels of nesting.</blockquote>
<h3>Context + Hooks</h3>
<p>Alright, in this case, I’ll omit using the usual state pattern and take a page out of Redux’s book. Hooks introduced the <a href="https://reactjs.org/docs/hooks-reference.html#usereducer">useReducer</a> hook, and that works wonders since we can use the same <code>dispatch</code> callback all over our codebase.</p>
<blockquote>The “store” declaration is in the same file purely for demonstration purposes</blockquote>
<p>Let’s try to redo the previous snippet.</p>
<pre><code class="language-js">const initialState = {items: {}, selectedItems: {}};

const reducer = (state, action) =&gt; {
  switch (action.type) {
    case &#x27;DELETE_ITEMS&#x27;:
      return {...state, items: _.omit(state.items, action.payload.ids)};
    // ...
    default:
      return state;
  }
};

const AppContext = createContext(null);

export const useAppContext = () =&gt; useContext(AppContext); // expose the custom Hook

const App = () =&gt; {
  const [state, dispatch] = useReducer(reducer, initialState);

  // The Provider component is required so that useAppContext can function
  return (
    &lt;AppContext.Provider value={[state, dispatch]}&gt;
      &lt;Component1 /&gt;
      &lt;Component2 /&gt;
      &lt;Component3 /&gt;
    &lt;/AppContext.Provider&gt;
  );
};
</code></pre>
<p>Now these <code>ComponentX</code> components, can be totally agnostic of the data that their children need.</p>
<p>Let’s move on to our nested <code>ItemRow</code> component. We only care about the dispatch handler for now so let’s just introduce this <code>handleDelete</code>. In this naive example, the moment we trigger the button, the component will be unmounted so we don’t care about extra memoization in the <code>handleDelete</code> callback.</p>
<p>The reason why <code>DELETE_ITEMS</code> expects an array of ids is so that it can be reused when we have multi-row controls.</p>
<pre><code class="language-js">// ...
// Even better if you can use Webpack alias here
import {useAppContext} from &#x27;../../App&#x27;;

const ItemRow = memo(({id, name, description, quantity, dateAdded}) =&gt; {
  const [state, dispatch] = useAppContext();

  const handleDelete = () =&gt;
    dispatch({
      type: &#x27;DELETE_ITEMS&#x27;,
      payload: {ids: [id]},
    });

  return (
    &lt;tr&gt;
      &lt;td&gt;{name}&lt;/td&gt;
      &lt;td&gt;{description}&lt;/td&gt;
      &lt;td&gt;{quantity}&lt;/td&gt;
      &lt;td&gt;{dateAdded}&lt;/td&gt;
      &lt;td&gt;
        &lt;button onClick={handleDelete}&gt;Remove&lt;/button&gt;
      &lt;/td&gt;
    &lt;/tr&gt;
  );
});
</code></pre>
<p>Now if we wanted to add more functionality to the above component, that wouldn’t be that hard!
Let’s add the row selection functionality. Previously, we would have to include the <code>handleSelection</code> callback in every component between <code>App</code> &amp; <code>ItemRow</code>. Now, we&#x27;ll stick to using <code>dispatch</code> with a different type.</p>
<p>Since we use a hash-map instead of an array in our state, we can add this to our reducer:</p>
<pre><code class="language-jsx">case &quot;TOGGLE_ITEM&quot;: {
  const { id, value } = action.payload;
  return { ...state, selectedItems: { ...state.selectedItems, [id]: value } };
}
</code></pre>
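<p>As a quick sanity check, here’s how the two reducer cases behave on plain objects. Lodash’s <code>omit</code> is swapped for a hand-rolled one so the snippet is self-contained:</p>
<pre><code class="language-js">const omit = (obj, keys) =&gt;
  Object.fromEntries(Object.entries(obj).filter(([key]) =&gt; !keys.includes(key)));

const reducer = (state, action) =&gt; {
  switch (action.type) {
    case &#x27;DELETE_ITEMS&#x27;:
      return {...state, items: omit(state.items, action.payload.ids)};
    case &#x27;TOGGLE_ITEM&#x27;: {
      const {id, value} = action.payload;
      return {...state, selectedItems: {...state.selectedItems, [id]: value}};
    }
    default:
      return state;
  }
};
</code></pre>
<p>Both cases return a new object instead of mutating <code>state</code>, which is what lets our memoized components bail out of re-renders.</p>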
<p>We add the extra table column with the checkbox, wire up its <code>onChange</code> handler, and we&#x27;re good!</p>
<pre><code class="language-js">// ...
const ItemRow = memo(({ id, name, description, quantity, dateAdded }) =&gt; {
  const [state, dispatch] = useAppContext();

  // These callbacks would otherwise be recreated on every render, so let’s memoize them
  const handleDelete = useCallback(
    () =&gt;
      dispatch({
        type: &#x27;DELETE_ITEMS&#x27;,
        payload: { ids: [id] },
      }),
    [id]
  );

  const isSelected = !!state.selectedItems[id]

  const handleSelection = useCallback(() =&gt;
    dispatch({
      type: &#x27;TOGGLE_ITEM&#x27;,
      payload: {
        id,
        value: !isSelected,
      },
    }), [id, isSelected]
  );

  return (
    &lt;tr&gt;
      &lt;td&gt;&lt;input type=&#x27;checkbox&#x27; checked={isSelected} onChange={handleSelection} /&gt;&lt;/td&gt;
      &lt;td&gt;{name}&lt;/td&gt;
      &lt;td&gt;{description}&lt;/td&gt;
      &lt;td&gt;{quantity}&lt;/td&gt;
      &lt;td&gt;{dateAdded}&lt;/td&gt;
      &lt;td&gt;
        &lt;button onClick={handleDelete}&gt;Remove&lt;/button&gt;
      &lt;/td&gt;
    &lt;/tr&gt;
  );
});
</code></pre>
<p>In case you wondered why we can safely omit <code>dispatch</code> from the <code>useCallback</code> dependencies, that&#x27;s straight from the docs:</p>
<blockquote>
<p>React guarantees that dispatch function identity is stable and won’t change on re-renders. This is why it’s safe to omit from the useEffect or useCallback dependency list.</p>
</blockquote>
<h3>Final thoughts</h3>
<p>That is all.</p>
<p>React hooks are a breath of fresh air and I really enjoy utilizing them more and more. That being said, as we move to functions rather than classes, it’s good to be extra careful about memoizing all your expensive computations and callbacks.</p>
<p>The official <a href="https://reactjs.org/docs/hooks-reference.html">documentation</a> is probably the best resource out there, so that&#x27;s all you need.
I would strongly recommend using the <a href="https://www.npmjs.com/package/eslint-plugin-react-hooks#installation">accompanying</a> eslint plugin, which helps immensely.</p>
<p>As for the Context API, let’s keep in mind that it’s a simple way to tunnel data to components further down the component tree. It’s not a replacement for Redux, even if they tend to accomplish the same goals in small scale projects.</p>
<p>Redux might be using Context internally, but it also enables great features like the DevTools plugin, &#x27;time-travel&#x27; and more. I would point you to <a href="https://medium.com/@dan_abramov/you-might-not-need-redux-be46360cf367">You Might Not Need Redux</a> by Dan Abramov, because who better to reason about it than him?</p>
            <author>dnlytras@gmail.com (Dimitrios Lytras)</author>
        </item>
        <item>
            <title><![CDATA[Setting up Pi-hole]]></title>
            <link>https://dnlytras.com/blog/pi-hole</link>
            <guid isPermaLink="false">https://dnlytras.com/blog/pi-hole</guid>
            <pubDate>Sat, 20 Apr 2019 00:00:00 GMT</pubDate>
            <description><![CDATA[Blocking all ads in my local network]]></description>
            <content:encoded><![CDATA[<link rel="preload" as="image" href="https://res.cloudinary.com/ds9pd4ywd/image/upload/w_760/v1667745620/blog-images/posts/pi-hole/etcher_b8iuum.png"/><link rel="preload" as="image" href="https://res.cloudinary.com/ds9pd4ywd/image/upload/w_760/v1667745620/blog-images/posts/pi-hole/step1_s8ymfe.jpg"/><link rel="preload" as="image" href="https://res.cloudinary.com/ds9pd4ywd/image/upload/w_760/v1667745621/blog-images/posts/pi-hole/step2_rm8rbg.jpg"/><link rel="preload" as="image" href="https://res.cloudinary.com/ds9pd4ywd/image/upload/w_760/v1667745620/blog-images/posts/pi-hole/step3_njmfmf.jpg"/><link rel="preload" as="image" href="https://res.cloudinary.com/ds9pd4ywd/image/upload/w_760/v1667745620/blog-images/posts/pi-hole/home_owcu8b.jpg"/><link rel="preload" as="image" href="https://res.cloudinary.com/ds9pd4ywd/image/upload/w_760/v1667745620/blog-images/posts/pi-hole/ip_gphykd.png"/><link rel="preload" as="image" href="https://res.cloudinary.com/ds9pd4ywd/image/upload/w_760/v1667745622/blog-images/posts/pi-hole/step4_tugadu.png"/><link rel="preload" as="image" href="https://res.cloudinary.com/ds9pd4ywd/image/upload/w_760/v1667745624/blog-images/posts/pi-hole/step5_pjttg6.png"/><link rel="preload" as="image" href="https://res.cloudinary.com/ds9pd4ywd/image/upload/w_760/v1667745622/blog-images/posts/pi-hole/screen1_ibtgi6.png"/><link rel="preload" as="image" href="https://res.cloudinary.com/ds9pd4ywd/image/upload/w_760/v1667745622/blog-images/posts/pi-hole/screen2_gnhyow.png"/><link rel="preload" as="image" href="https://res.cloudinary.com/ds9pd4ywd/image/upload/w_760/v1667745622/blog-images/posts/pi-hole/dashboard_pzkfwq.png"/><link rel="preload" as="image" href="https://res.cloudinary.com/ds9pd4ywd/image/upload/w_760/v1667745622/blog-images/posts/pi-hole/dashboard2_x6zkqp.png"/><p><a href="https://pi-hole.net/">Pi-hole</a> has been on my radar for quite some time. 
After being overwhelmed by the ads on my mobile, I decided to give it a go. Pi-hole, in short, blocks all ads at the DNS level.</p>
<p>For our computers, it’s easy to set up an adblocker.</p>
<p>What about a smart TV though? How can one ensure that their purchase is ad-free? The fact that this is even a reality is audacious, but let it be for now. We want to verify that our TVs are full of corny sitcoms and nothing else. As for mobiles, even if we can run an adblocker (for example Firefox mobile), we can’t take the performance hit.</p>
<p><a href="https://pi-hole.net/">Reading from their website</a>...</p>
<blockquote>
<p>The Pi-hole® is a DNS sinkhole that protects your devices from unwanted content, without installing any client-side software.</p>
<ul>
<li><strong>Easy-to-install</strong>: our versatile installer walks you through the process, and takes less than ten minutes</li>
<li><strong>Resolute</strong>: content is blocked in non-browser locations, such as ad-laden mobile apps and smart TVs</li>
<li><strong>Responsive</strong>: seamlessly speeds up the feel of everyday browsing by caching DNS queries</li>
<li><strong>Lightweight</strong>: runs smoothly with minimal hardware and software requirements</li>
<li><strong>Robust</strong>: a command line interface that is quality assured for interoperability</li>
<li><strong>Insightful</strong>: a beautiful responsive Web Interface dashboard to view and control your Pi-hole</li>
<li><strong>Versatile</strong>: can optionally function as a DHCP server, ensuring all your devices are protected automatically</li>
<li><strong>Scalable</strong>: capable of handling hundreds of millions of queries when installed on server-grade hardware</li>
<li><strong>Modern</strong>: blocks ads over both IPv4 and IPv6</li>
<li><strong>Free</strong>: open source software which helps ensure you are the sole person in control of your privacy</li>
</ul>
</blockquote>
<h3>Setup</h3>
<p>My colleague Chris has <a href="https://www.balena.io/blog/deploy-network-wide-ad-blocking-with-pi-hole-and-a-raspberry-pi/">written a great tutorial</a> using <a href="https://balena.io">balena</a>. In my version, I’ll use <a href="https://raspbian.org/">Raspbian</a> instead. Here’s what we’ll need:</p>
<ul>
<li>
<p>Raspberry Pi (&#x27;B&#x27; in my case)</p>
</li>
<li>
<p>SD card (I use a leftover SanDisk extreme 64GB)</p>
</li>
<li>
<p>Ethernet cable</p>
</li>
<li>
<p>Raspberry power adaptor</p>
</li>
<li>
<p><a href="https://www.balena.io/etcher/">Etcher</a> (in order to flash the Raspbian image)</p>
</li>
<li>
<p><a href="https://www.raspberrypi.org/downloads/raspbian/">Raspbian Stretch Lite</a></p>
</li>
<li>
<p>Your favorite SSH client (I’ll use macOS terminal)</p>
</li>
</ul>
<h3>Preparing the Raspberry Pi</h3>
<p>First, let’s get the <a href="https://www.raspberrypi.org/downloads/raspbian/">Raspbian Lite</a> image, and then <a href="https://www.balena.io/etcher/">Etcher</a> in order to flash it.</p>
<p><img src="https://res.cloudinary.com/ds9pd4ywd/image/upload/w_760/v1667745620/blog-images/posts/pi-hole/etcher_b8iuum.png" alt="etcher"/></p>
<p>If you’re an owner of a MacBook Pro with two USB-C ports like me, get your dongles ready.</p>
<p><img src="https://res.cloudinary.com/ds9pd4ywd/image/upload/w_760/v1667745620/blog-images/posts/pi-hole/step1_s8ymfe.jpg" alt="step1"/></p>
<p>When the flashing is complete, we have to enable SSH on the device. That&#x27;s very simple, as we only have to create an &#x27;ssh&#x27; file in the root directory.</p>
<pre><code class="language-bash"># &#x27;cd&#x27; in /Volumes/{name-of-the-media}

touch ssh
</code></pre>
<p>Great, now let’s get these two lovebirds together and plug the Pi into the router. I decided to use Ethernet for a steady connection.</p>
<p><img src="https://res.cloudinary.com/ds9pd4ywd/image/upload/w_760/v1667745621/blog-images/posts/pi-hole/step2_rm8rbg.jpg" alt="step2"/></p>
<p><img src="https://res.cloudinary.com/ds9pd4ywd/image/upload/w_760/v1667745620/blog-images/posts/pi-hole/step3_njmfmf.jpg" alt="step3"/></p>
<p>Nothing out of order here.</p>
<p><img src="https://res.cloudinary.com/ds9pd4ywd/image/upload/w_760/v1667745620/blog-images/posts/pi-hole/home_owcu8b.jpg" alt="home"/></p>
<h3>Preparing the installation</h3>
<p>Cool beans. Now we need to get the IP address of the Pi. Simply log in to your router&#x27;s admin panel and check the connected devices.</p>
<p><img src="https://res.cloudinary.com/ds9pd4ywd/image/upload/w_760/v1667745620/blog-images/posts/pi-hole/ip_gphykd.png" alt="ip"/></p>
<p>Alright, let’s SSH into our Raspbian install with <code>ssh pi@{ip-of-the-device}</code>. The default password is <code>raspberry</code> (use <code>passwd</code> to change it).</p>
<p>Before we continue, let’s bring our image up to date:</p>
<pre><code class="language-bash">sudo apt-get update
sudo apt-get dist-upgrade
</code></pre>
<p><img src="https://res.cloudinary.com/ds9pd4ywd/image/upload/w_760/v1667745622/blog-images/posts/pi-hole/step4_tugadu.png" alt="step4"/></p>
<p>and then run the install command</p>
<pre><code class="language-bash">curl -sSL https://install.pi-hole.net | bash
</code></pre>
<p><img src="https://res.cloudinary.com/ds9pd4ywd/image/upload/w_760/v1667745624/blog-images/posts/pi-hole/step5_pjttg6.png" alt="step5"/></p>
<p>and you will see this lovely image. The pink screen of progress.</p>
<p><img src="https://res.cloudinary.com/ds9pd4ywd/image/upload/w_760/v1667745622/blog-images/posts/pi-hole/screen1_ibtgi6.png" alt="screen1"/></p>
<p>From here, it’s really up to you. Follow along with the wizard and pick whatever suits your use case. When everything is said and done, we can go check the dashboard!</p>
<p>Remember to change the admin password with <code>pihole -a -p</code> before quitting the terminal session.</p>
<p><img src="https://res.cloudinary.com/ds9pd4ywd/image/upload/w_760/v1667745622/blog-images/posts/pi-hole/screen2_gnhyow.png" alt="screen2"/></p>
<h3>Dashboard</h3>
<p><img src="https://res.cloudinary.com/ds9pd4ywd/image/upload/w_760/v1667745622/blog-images/posts/pi-hole/dashboard_pzkfwq.png" alt="dashboard"/></p>
<p>I love this already. I’m a sucker for graphs, even if I don’t understand a thing.</p>
<p>Let’s get some extra lists from <a href="https://blocklist.site/app/">blocklist</a></p>
<pre><code class="language-sh">https://blocklist.site/app/dl/crypto
https://blocklist.site/app/dl/drugs
https://blocklist.site/app/dl/fraud
https://blocklist.site/app/dl/fakenews
https://blocklist.site/app/dl/gambling
https://blocklist.site/app/dl/malware
https://blocklist.site/app/dl/phishing
https://blocklist.site/app/dl/porn
https://blocklist.site/app/dl/proxy
https://blocklist.site/app/dl/ransomware
https://blocklist.site/app/dl/redirect
https://blocklist.site/app/dl/scam
https://blocklist.site/app/dl/spam
https://blocklist.site/app/dl/torrent
https://blocklist.site/app/dl/tracking
https://blocklist.site/app/dl/facebook
https://blocklist.site/app/dl/youtube
</code></pre>
<p>and place them in <code>Settings/Blocklists</code>.</p>
<p><img src="https://res.cloudinary.com/ds9pd4ywd/image/upload/w_760/v1667745622/blog-images/posts/pi-hole/dashboard2_x6zkqp.png" alt="dashboard2"/></p>
<p>I&#x27;ve also scanned through Reddit and played around with various others like:</p>
<pre><code class="language-sh">https://v.firebog.net/hosts/Easyprivacy.txt
https://v.firebog.net/hosts/Prigent-Ads.txt
https://gitlab.com/quidsup/notrack-blocklists/raw/master/notrack-blocklist.txt
https://raw.githubusercontent.com/StevenBlack/hosts/master/data/add.2o7Net/hosts
</code></pre>
<h3>Caveats</h3>
<ul>
<li>Make sure you have a static IP for the Raspberry Pi. Modern routers are smart enough to assign the same IP, but let’s be extra careful.</li>
<li>Keep the block lists up to date. For smart TVs with zero protection, that’s good enough. For your other devices, adding another layer of protection like <a href="https://www.obdev.at/products/littlesnitch/index.html">Little Snitch</a> would be great.</li>
</ul>
<h3>Moving on</h3>
<p>Well now, it’s up to both of us to use it more.
I’m sure there will be some false positives, but with everyday use these issues should get ironed out.</p>
<p>I’ll keep the Pi as my DNS server for my own laptop &amp; mobile, but let my work-related one talk with the router directly. Better not mess with meetings for such an experiment :)</p>
<h3>Resources</h3>
<p>I’m not a clever man, I used <a href="https://www.myhelpfulguides.com/2018/07/15/install-pi-hole-raspbian-lite/">this guide</a> and <a href="https://www.reddit.com/r/pihole/">the dedicated subreddit</a> for help.</p>]]></content:encoded>
            <author>dnlytras@gmail.com (Dimitrios Lytras)</author>
        </item>
        <item>
            <title><![CDATA[The impostor syndrome]]></title>
            <link>https://dnlytras.com/blog/the-impostor-syndrome</link>
            <guid isPermaLink="false">https://dnlytras.com/blog/the-impostor-syndrome</guid>
            <pubDate>Thu, 04 Apr 2019 00:00:00 GMT</pubDate>
            <description><![CDATA[The irrational fear of being exposed as a fraud]]></description>
            <content:encoded><![CDATA[<p>When can I say that I’m proficient in React or JavaScript?</p>
<p>How much TypeScript should I have written over the past months to consider it an <em>&#x27;active&#x27;</em> skill? Do side projects count?</p>
<p>Am I good because I can get those daily tasks done? What&#x27;s the definition of a <em>good</em> developer? Would I be rubbish at Amazon?</p>
<hr/>
<p>For a part of my career, I was a coding monkey for Greek startups with zero guidance. None of my colleagues had more than 2 years&#x27; worth of solid experience in the field. My deadlines were the evenings, and testing was something the client did in production.</p>
<p>Under these circumstances, it’s very hard to pick up good practices along the way. You either work your ass off on your own time, or you’re stuck. It was very hard to change my mindset from <em>&quot;adding this quick fix, and maybe one day...&quot;</em>, to <em>&quot;let’s tackle the root cause, and make sure it won&#x27;t happen again&quot;</em>.</p>
<p>And yet here I am, working with brilliant people all across the globe. I’ve learned a ton, and I still feel like a fraud.</p>
<p>It happens, I&#x27;m not a fraud. I worked for it.</p>
<blockquote><p>Impostor syndrome (also known as impostor phenomenon, impostorism, fraud syndrome or the impostor experience) is a psychological pattern in which an individual doubts their accomplishments and has a persistent internalized fear of being exposed as a &quot;fraud&quot;.</p><p>Despite external evidence of their competence, those experiencing this phenomenon remain convinced that they are frauds, and do not deserve all they have achieved.</p><p>Individuals with impostorism incorrectly attribute their success to luck, or as a result of deceiving others into thinking they are more intelligent than they perceive themselves to be.</p><p><a href="https://en.wikipedia.org/wiki/Impostor_syndrome">Wikipedia</a></p></blockquote>]]></content:encoded>
            <author>dnlytras@gmail.com (Dimitrios Lytras)</author>
        </item>
        <item>
            <title><![CDATA[Using Git rebase]]></title>
            <link>https://dnlytras.com/blog/updating-branch</link>
            <guid isPermaLink="false">https://dnlytras.com/blog/updating-branch</guid>
            <pubDate>Sun, 10 Jun 2018 00:00:00 GMT</pubDate>
            <description><![CDATA[Updating and rebasing a branch]]></description>
<content:encoded><![CDATA[<p>Back when I started using Git, I was faced with this challenge:
someone has updated the upstream. How do I include the latest changes in my working branch?</p>
<p>The first approach is to pull <code>master</code> and merge it into the working branch:</p>
<pre><code class="language-txt">git pull origin master
</code></pre>
<p>Unfortunately, this way we&#x27;re creating an extra merge commit. What we really want is the following:</p>
<ol>
<li>Isolate our changes</li>
<li>Get the latest upstream updates</li>
<li>Apply the changes on top</li>
<li>Resolve any conflicts</li>
</ol>
<p>So we don’t want to merge the updated upstream into our branch, but to <em>rebase</em> our branch on top of it.</p>
<hr/>
<p>First, let’s fast-forward the local master branch.</p>
<pre><code class="language-txt">git checkout master
git pull
</code></pre>
<p>Now let’s head back to our working branch and do a rebase. All of our commits will be applied on top of the updated master.</p>
<pre><code class="language-txt">git checkout working_branch
git rebase master
</code></pre>
<blockquote><p>You can completely ignore updating your local master branch and simply run <code>git rebase origin/master</code>. I like having my local master up to date so I opt for the first case.</p></blockquote>
<p>At this point, we might have a few conflicts. We have a few options to deal with them:</p>
<ol>
<li><code>git rebase --abort</code> will completely stop the rebase. The branch&#x27;s initial state will be restored and we can start over. Nothing changed.</li>
<li><code>git rebase --skip</code> will ignore the local commit that causes the problem. This is useful if the commit is not relevant anymore or if it’s a mistake.</li>
<li>or fix the conflicts, stage them and keep the rebase going with <code>git rebase --continue</code></li>
</ol>
<h3>Merging the branch</h3>
<p>Having finished the work, it’s always a nice idea to &#x27;squash&#x27; commits that are not relevant in isolation. For example, if you have a few commits that just fix typos or minor CSS tweaks, it’s better to merge them into a single commit.</p>
<p>For this reason, doing an &#x27;Interactive&#x27; rebase is really helpful.</p>
<pre><code class="language-txt">git rebase -i master
</code></pre>
<p>Here’s an example of an interactive rebase prompt:</p>
<pre><code class="language-txt">pick 07c6add Include password strength indicator in registration formm
pick d29b1fb Fix the regex
pick 317tt36 Minor typo
pick fa2fff3 CSS tweaks

# Rebase 44a7ee6..fa2fff3 onto 44a7ee6
#
# Commands:
#  p, pick = use commit
#  r, reword = use commit, but edit the commit message
#  e, edit = use commit, but stop for amending
#  s, squash = use commit, but meld into previous commit
#  f, fixup = like &quot;squash&quot;, but discard this commit’s log message
#  x, exec = run command (the rest of the line) using shell
#
# These lines can be re-ordered; they are executed from top to bottom.
#
# If you remove a line here THAT COMMIT WILL BE LOST.
#
# However, if you remove everything, the rebase will be aborted.
#
</code></pre>
<p>Now you can do any of the following</p>
<ol>
<li><strong>Pick</strong> - Keep the commit and move on to the next one</li>
<li><strong>Reword</strong> - Keep the commit, and get a prompt to update the message</li>
<li><strong>Edit</strong> - Keep the commit, but wait for you to update files and include them before moving on</li>
<li><strong>Squash</strong> - Merge into the commit above, and get a prompt to update the message</li>
<li><strong>Fixup</strong> - Same as <code>Squash</code> but it won’t prompt for a new message</li>
<li><strong>Exec</strong> - Allows running shell commands against a commit</li>
</ol>
<blockquote>You can reorder these commits around if needed.</blockquote>
<p>For this case, we&#x27;ll:</p>
<ol>
<li>Reword and keep the first commit</li>
<li>Squash the rest and discard the commit message by doing a <code>Fixup</code></li>
</ol>
<pre><code class="language-txt">r 07c6add Include password strength indicator in registration formm
f d29b1fb Fix the regex
f 317tt36 Minor typo
f fa2fff3 CSS tweaks
</code></pre>
<p>As a result, our branch will contain a single meaningful commit on top of all the latest upstream changes.</p>
<p>Finally, we can push the branch to the remote repository. Keep in mind that the rebase rewrote our commits, so if the branch was pushed before, a plain push will be rejected and we need a (safe) force push.</p>
<pre><code class="language-txt">git push --force-with-lease origin working_branch
</code></pre>]]></content:encoded>
            <author>dnlytras@gmail.com (Dimitrios Lytras)</author>
        </item>
        <item>
            <title><![CDATA[Error handling in Vue 2 and Vuex]]></title>
            <link>https://dnlytras.com/blog/vue-error-handling</link>
            <guid isPermaLink="false">https://dnlytras.com/blog/vue-error-handling</guid>
            <pubDate>Thu, 11 Jan 2018 00:00:00 GMT</pubDate>
            <description><![CDATA[Handling errors gracefully using async & await]]></description>
            <content:encoded><![CDATA[<blockquote>In this article we&#x27;re using Vue 2. I can’t guarantee that what&#x27;s referred here is considered to be a good practice anymore, as I’m mostly involved with React these days.</blockquote>
<h3>Demo</h3>
<iframe src="https://codesandbox.io/embed/91132rm8kp?fontsize=12&amp;hidenavigation=1&amp;module=%2Futils%2Fhelpers.js" style="width:100%;height:500px;border:0;border-radius:4px;overflow:hidden" sandbox="allow-modals allow-forms allow-popups allow-scripts allow-same-origin"></iframe>
<p>I don&#x27;t enjoy writing error handling in JavaScript. I find it really annoying to have to wrap everything in a try-catch block, especially for async calls. Let&#x27;s see if we can make it a bit easier.</p>
<p>Let’s start by using a call that may fail.</p>
<pre><code class="language-js">const requestUsers = () =&gt; fetch(&#x27;https://jsonplaceholder.typicode.com/users&#x27;);
</code></pre>
<p>Normally we would add a try-catch block around the call to handle any error.</p>
<pre><code class="language-js">const doTheCall = async () =&gt; {
  try {
    const users = await requestUsers();
  } catch (e) {
    // something with the error
  }
};
</code></pre>
<p>Instead, let&#x27;s create a wrapper function that will handle the error for us.</p>
<pre><code class="language-js">export const wrapRequest =
  (fn) =&gt;
  (...params) =&gt;
    fn(...params)
      .then((response) =&gt; {
        if (!response.ok) {
          throw response;
        }
        return response.json();
      })
      .catch((error) =&gt; handleError(error));
</code></pre>
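<p>To see the wrapper in action, here’s a self-contained sketch. The fake responses and the stubbed <code>handleError</code> are assumptions for illustration; the real <code>handleError</code> is defined next.</p>

```javascript
// Self-contained sketch: wrapRequest from above, plus stubs so it runs anywhere.
const wrapRequest =
  (fn) =>
  (...params) =>
    fn(...params)
      .then((response) => {
        if (!response.ok) {
          throw response;
        }
        return response.json();
      })
      .catch((error) => handleError(error));

// Stub: the real handleError dispatches to the Vuex store instead
const seenErrors = [];
const handleError = (error) => {
  seenErrors.push(error ? error.status : error);
};

// Fake fetch-like calls standing in for the real endpoints
const okCall = () =>
  Promise.resolve({ok: true, json: () => Promise.resolve([{id: 1}])});
const failingCall = () => Promise.resolve({ok: false, status: 404});

const wrappedOk = wrapRequest(okCall);
const wrappedFail = wrapRequest(failingCall);

wrappedOk().then((data) => console.log(data));   // [ { id: 1 } ]
wrappedFail().then((data) => console.log(data)); // undefined, seenErrors holds 404
```

Callers never need their own try-catch: a failed request resolves to <code>undefined</code> and the error is routed through one central handler.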
<p>For this example, let&#x27;s write a simple handler that prepares an error message and dispatches it to the store.</p>
<pre><code class="language-js">import store from &#x27;./store&#x27;;

const handleError = (error) =&gt; {
  const errorStatus = error ? error.status : error;
  const errorMessage = prepareErrorMessage(errorStatus);
  store.dispatch(&#x27;populateErrors&#x27;, errorMessage);
};
</code></pre>
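<p><code>prepareErrorMessage</code> itself isn’t shown here; a minimal sketch (the status-to-message mapping below is an assumption) could look like this:</p>

```javascript
// Hypothetical prepareErrorMessage: map a status code to user-facing text,
// falling back to a network-error message when there is no status at all.
const prepareErrorMessage = (status) => {
  const messages = {
    401: 'Please log in first',
    404: "We couldn't find what you asked for",
    500: 'Something broke on our end, try again later',
  };
  if (!status) return 'Network error, check your connection';
  return messages[status] || `Request failed with status ${status}`;
};

console.log(prepareErrorMessage(404)); // We couldn't find what you asked for
console.log(prepareErrorMessage(undefined)); // Network error, check your connection
```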
<h3>Vuex Setup</h3>
<p>Let&#x27;s set up a Vuex store.</p>
<pre><code class="language-js">import Vue from &#x27;vue&#x27;;
import Vuex from &#x27;vuex&#x27;;

import errors from &#x27;./_errors.js&#x27;;
import users from &#x27;./_users.js&#x27;;
import loader from &#x27;./_loader.js&#x27;;

Vue.use(Vuex);

export default new Vuex.Store({
  modules: {
    errors,
    users,
    loader,
  },
});
</code></pre>
<p>And a Users module to handle the users list.</p>
<pre><code class="language-js">import {wrappedRequestUsers} from &#x27;../requests&#x27;;

const state = {
  usersList: [],
};
const getters = {
  usersList: (state) =&gt; state.usersList.length,
};
const mutations = {
  usersListSet: (state, list) =&gt; (state.usersList = list),
};
const actions = {
  requestUsers: async ({commit}) =&gt; {
    const data = await wrappedRequestUsers();
    if (data) commit(&#x27;usersListSet&#x27;, data);
  },
  clearUsersList: ({commit}) =&gt; {
    commit(&#x27;usersListSet&#x27;, []);
  },
};

export default {
  state,
  getters,
  mutations,
  actions,
};
</code></pre>
<p>As for the error handling actions, we will push the new error message in the state</p>
<pre><code class="language-js">const state = {
  errors: [],
};

const getters = {
  errors: (state) =&gt; state.errors,
};

const mutations = {
  addError: (state, error) =&gt; state.errors.unshift(error),
  popError: (state) =&gt; state.errors.pop(),
};

const actions = {
  populateErrors: ({commit}, error) =&gt; {
    commit(&#x27;addError&#x27;, error);
    setTimeout(() =&gt; commit(&#x27;popError&#x27;), 3000);
  },
};

export default {
  state,
  getters,
  mutations,
  actions,
};
</code></pre>
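<p>Note the queue discipline in the mutations above: <code>unshift</code> adds new errors to the front, while <code>pop</code> removes from the back, so each timeout always dismisses the oldest surviving error. In plain JS:</p>

```javascript
// The errors queue: unshift puts the newest error at the front, pop removes
// the oldest from the back, so timeouts dismiss errors in arrival order.
const errors = [];
const addError = (e) => errors.unshift(e);
const popError = () => errors.pop();

addError('first');
addError('second');
console.log(errors); // [ 'second', 'first' ]
popError();          // dismisses 'first', the oldest
console.log(errors); // [ 'second' ]
```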
<p>And the custom toast component will simply loop through every error message</p>
<pre><code class="language-vue">&lt;template&gt;
  &lt;div class=&quot;error-wrapper&quot;&gt;
    &lt;transition-group name=&quot;fade&quot; tag=&quot;div&quot;&gt;
      &lt;div class=&quot;error&quot; v-for=&quot;(error, index) in errors&quot; :key=&quot;index&quot;&gt;
        {{ error }}
      &lt;/div&gt;
    &lt;/transition-group&gt;
  &lt;/div&gt;
&lt;/template&gt;

&lt;script&gt;
import {mapGetters} from &#x27;vuex&#x27;;

export default {
  name: &#x27;errorToast&#x27;,
  computed: {
    ...mapGetters([&#x27;errors&#x27;]),
  },
};
&lt;/script&gt;

&lt;style&gt;
.fade-enter-active,
.fade-leave-active {
  transition: opacity 0.5s;
}
.fade-enter,
.fade-leave-to {
  opacity: 0;
}
.error-wrapper {
  position: absolute;
  top: 0;
  right: 0;
  .error {
    background: #cc0000;
    border-radius: 8px;
    color: #fff;
    margin-top: 1em;
    padding: 0.5em 2em;
  }
}
&lt;/style&gt;
</code></pre>
<h3>Use a spinner</h3>
<p>Sometimes we have to display a spinner. For this case, I’ve made a separate module for the loading state.
When a loader is needed, we won’t call the action directly; instead, we’ll dispatch <code>executeWithLoader</code> with the action name as a param.</p>
<pre><code class="language-js">const state = {
  loading: 0,
};
const getters = {
  loading: (state) =&gt; state.loading &gt; 0,
  loadingStatus: (state) =&gt; (state.loading &gt; 0 ? &#x27;Fetching stuff&#x27; : &#x27;Ready&#x27;),
};
const mutations = {
  updateLoader: (state, loading) =&gt;
    (state.loading = loading ? state.loading + 1 : state.loading - 1),
};
const actions = {
  executeWithLoader: async ({commit, dispatch}, fn) =&gt; {
    commit(&#x27;updateLoader&#x27;, true);
    await dispatch(fn, {root: true});
    commit(&#x27;updateLoader&#x27;, false);
  },
};

export default {
  state,
  getters,
  mutations,
  actions,
};
</code></pre>
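<p>The counter (rather than a boolean) is what lets overlapping requests share one spinner: it only hides when the last request finishes. A plain-JS sketch of that invariant:</p>

```javascript
// Sketch of the loader counter invariant: the spinner shows while loading > 0,
// so overlapping requests can't hide it prematurely.
let loading = 0;
const updateLoader = (isLoading) => {
  loading = isLoading ? loading + 1 : loading - 1;
};
const spinnerVisible = () => loading > 0;

updateLoader(true);            // request A starts
updateLoader(true);            // request B starts
updateLoader(false);           // A finishes
console.log(spinnerVisible()); // true, B is still in flight
updateLoader(false);           // B finishes
console.log(spinnerVisible()); // false
```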
<pre><code class="language-vue-html">&lt;button
  @click=&#x27;executeWithLoader(&quot;requestUsers&quot;)&#x27;
  :disabled=&quot;loading&quot;
  class=&quot;button button--success&quot;
&gt;
  Fetch users
&lt;/button&gt;
</code></pre>
<p>There you have it. We have a simple way to handle errors in our Vue app.</p>]]></content:encoded>
            <author>dnlytras@gmail.com (Dimitrios Lytras)</author>
        </item>
        <item>
            <title><![CDATA[Your application won’t always run on MacBook]]></title>
            <link>https://dnlytras.com/blog/not-always-macbook</link>
            <guid isPermaLink="false">https://dnlytras.com/blog/not-always-macbook</guid>
            <pubDate>Thu, 01 Jun 2017 00:00:00 GMT</pubDate>
            <description><![CDATA[A lesson in user experience]]></description>
            <content:encoded><![CDATA[<p>One of my first projects as a junior front-end developer was a fast food delivery app.</p>
<p>When I showed my father the app I built, he was excited for me and he wanted to try it. Unfortunately, he uses an old Dell 620. He really likes the computer&#x27;s form factor and it works fine for him. Why would he upgrade?</p>
<p>Then I saw him trying to use the app. He was confused and didn&#x27;t know how to navigate through it. The performance was also horrible: I blocked the DOM multiple times and overworked his CPU by parsing the entire food catalog twice. That&#x27;s the result of inexperience.</p>
<p>Now, when implementing a feature, I like to put a face on the average customer: the face of my father. It puts things into perspective and throws out the window every half-assed implementation I would normally feel alright with.</p>
            <author>dnlytras@gmail.com (Dimitrios Lytras)</author>
        </item>
        <item>
            <title><![CDATA[Experimenting with PostCSS]]></title>
            <link>https://dnlytras.com/blog/postcss-you-beauty</link>
            <guid isPermaLink="false">https://dnlytras.com/blog/postcss-you-beauty</guid>
            <pubDate>Sun, 30 Apr 2017 00:00:00 GMT</pubDate>
            <description><![CDATA[Learning PostCSS and making a plugin while at it]]></description>
            <content:encoded><![CDATA[<blockquote><p>This was my first attempt to write a post.<br/>I haven&#x27;t updated it out of respect to the struggling new developer trying to find his voice.</p></blockquote>
<hr/>
<p>These past few weeks I’ve been messing around with PostCSS. Everything about this ecosystem is astounding, and rightly so. Before we go any further, think of PostCSS as a preprocessor like your favourite one. Technically it’s not, but let’s take one step at a time.</p>
<p>First of all, what is it that makes PostCSS’ approach so much more attractive? Saving you some time, the answer is its modular approach, allowing the user to tweak the dependencies as they see fit. PostCSS standalone does nothing. It’s up to you to include everything that benefits your development process. It’s essentially a tool, waiting to feed your awesome CSS to a number of JavaScript plugins.</p>
<p>SASS and LESS are monolithic. You get everything, batteries included, and with that thousands of lines of code you might not need. While it’s great to have a black box that just works, as a developer I prefer an ecosystem where I can use, among others, my own tailor-made plugins. And this, folks, is the biggest sell of PostCSS. You can build your CSS development process from the ground up, exactly like you want.</p>
<p>Another distinct difference is that every pre-processor has its own idiomatic syntax, limited to its own use. Essentially you write code inside CSS. Why not just write CSS separately and have JavaScript do its ES6 magic afterwards? JavaScript isn’t going anywhere; can LESS say the same?</p>
<h3>How it Works</h3>
<p>Alright, let’s take the example of <a href="https://github.com/postcss/autoprefixer">Autoprefixer</a>, the most famous PostCSS plugin. Minding your own business, you decide to align these cat facts paragraphs side by side. Something seems off, and you notice that they don’t have the same height. It’s expected, but it would be nice to arrange them in a way that shadows or borders wouldn’t seem silly.</p>
<p>One dirty but effective approach is the following:</p>
<pre><code class="language-css">.column-wrapper {
  overflow: hidden;
}

.column-wrapper .column {
  margin-bottom: -9999px;
  padding-bottom: 9999px;
  /* and whatever grid css rules you want */
}
</code></pre>
<p>I shiver just by looking at it. But thanks to Flexbox we have a great tool in our hands to solve such problems. As expected though, in order for a Flexbox solution to work in all modern browsers, we have to use some prefixes.</p>
<pre><code class="language-css">.element {
  box-shadow: 0 0 2px 1px #797979;
  display: flex;
}
</code></pre>
<p>Now, what we should do is include the whole -moz and -webkit spam. While it might be alright for a small app, in more ambitious ones we should search for alternatives.</p>
<p>In SASS you would create a mixin and have it include the prefixes. A simple <code>include</code> doesn’t seem much, but it’s still suboptimal. What if, in the following months, a prefix or two is no longer needed? Consider the box shadow case: Firefox &amp; Chrome don’t need prefixes for shadows, but SASS would add them anyway.</p>
<p>The ideal case is to omit any prefix. Autoprefixer will parse the CSS code and add only the necessary ones itself. The sole thing you have to do is point out which browser versions you support. Consulting the <a href="https://caniuse.com" target="_blank">caniuse</a> database, Autoprefixer will take care of the rest.</p>
<p>How is that possible? Earlier I said that PostCSS does nothing without plugins. I lied. It does one thing, which is to transform your CSS code into an Abstract Syntax Tree (AST): a JSON-like data representation that every plugin will parse and modify.
When everything is said and done, the output is stringified and ready for production.</p>
<p>Play around with <a href="http://astexplorer.net/#/2uBU1BLuJ1">this example</a> to get a better grasp of the whole AST transformation.</p>
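<p>To make the AST idea concrete without installing anything, here’s a hand-rolled, drastically simplified stand-in for what PostCSS does. The node shapes below are only an illustration; the real AST has many more node types and properties.</p>

```javascript
// A toy, JSON-like stand-in for PostCSS's AST (real nodes are much richer).
const root = {
  type: 'root',
  nodes: [
    {
      type: 'rule',
      selector: '.element',
      nodes: [{type: 'decl', prop: 'display', value: 'flex'}],
    },
  ],
};

// A "plugin" is just a function over the tree: here, a naive flex prefixer
const prefixFlex = (ast) => {
  for (const rule of ast.nodes) {
    const extra = [];
    for (const decl of rule.nodes) {
      if (decl.prop === 'display' && decl.value === 'flex') {
        extra.push({type: 'decl', prop: 'display', value: '-webkit-flex'});
      }
    }
    rule.nodes = [...extra, ...rule.nodes];
  }
  return ast;
};

// When every plugin has run, the tree is stringified back into CSS
const stringify = (ast) =>
  ast.nodes
    .map(
      (rule) =>
        `${rule.selector} { ${rule.nodes
          .map((decl) => `${decl.prop}: ${decl.value}`)
          .join('; ')} }`
    )
    .join('\n');

console.log(stringify(prefixFlex(root)));
// .element { display: -webkit-flex; display: flex }
```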
<h3>Notable Plugins</h3>
<p>There are 200+ plugins out there, so it’s hard to list the &#x27;best&#x27; ones. What I can do, though, is post my PostCSS configuration for this very site.</p>
<pre><code class="language-js">const rucksack = require(&#x27;rucksack-css&#x27;);
const lost = require(&#x27;lost&#x27;);
const cssnext = require(&#x27;postcss-cssnext&#x27;);
const atImport = require(&#x27;postcss-import&#x27;);
const autocorrect = require(&#x27;postcss-autocorrect&#x27;);
const magician = require(&#x27;postcss-font-magician&#x27;);

exports.modifyWebpackConfig = function (config) {
  config.merge({
    postcss: [
      atImport(),
      magician({
        variants: {
          Raleway: {
            200: [],
            400: [],
            500: [],
          },
        },
      }),
      autocorrect(),
      rucksack(),
      cssnext({
        browsers: [&#x27;&gt;1%&#x27;, &#x27;last 2 versions&#x27;],
      }),
      lost(),
    ],
  });

  return config;
};
</code></pre>
<p>CSSnext allows me to write CSS specifications not quite supported yet by any browser. Variables, Nesting, etc, will be available sooner or later. So why not use them today?</p>
<p>Font Magician is my favourite. When it came to deciding which font to use, I wrote the following code:</p>
<pre><code class="language-css">body {
  /*font-family: &#x27;Montserrat&#x27;;*/
  /*font-family: &#x27;Josefin Sans&#x27;;*/
  /*font-family: &#x27;Inconsolata&#x27;;*/
  /*font-family: &#x27;Fira Mono&#x27;;*/
  /*font-family: &#x27;Roboto&#x27;;*/
  font-family: &#x27;Raleway&#x27;;
}
</code></pre>
<p>Hot reloading allowed me to change the font-family without any effort and check 6 fonts in under 30 seconds. It’s amazing. The site is extremely lightweight, so I can get away with Google fonts. If you&#x27;d rather use self-hosted fonts, Font Magician can help you out in this case too.</p>
<p>Rucksack is a collection of very handy plugins. Its responsive font-size plugin is pure awesomeness. I used to do the following in order to get responsive typography:</p>
<pre><code class="language-css">body {
  font-size: calc(0.5em + 1vw);
}
</code></pre>
<p>But with rucksack, I can just throw the following lines of code and be done with it</p>
<pre><code class="language-css">body {
    /* cssnext &amp; font magician lines */
    background-color: var(--secondary-color);
    color: var(--primary-color);
    font-family: &quot;Raleway&quot;;
    /* rucksack */
    font-size: responsive 14px 18px;
}
</code></pre>
<p>Of course, you can set some breakpoints for the upper and lower limits. I like it because you can have global responsive typography, but also target other elements in a nicer manner. I don’t really like calc too much, and em units can be tricky from time to time. If you want some specific rules for a call-to-action section, for example, just do this and avoid pixels and breakpoints:</p>
<pre><code class="language-css">&amp; .call-to-action {
  font-size: responsive 18px 22px;
}
</code></pre>
<p>Lost is a nice-to-have plugin for when you don’t want to include a CSS grid framework. I don’t use any in my side projects, so it saves some lines of code.</p>
<p>Finally <a href="https://github.com/dimitrisnl/postcss-autocorrect">postcss-autocorrect</a> is a plugin I made.</p>
<p>There are moments when I make a typo and wonder why nothing changed. With this plugin, these typos are corrected, the flow continues, and there is a warning in the console for the user. That&#x27;s all.</p>
<h3>Making a plugin</h3>
<p>Boilerplates are great, so clone <a href="https://github.com/postcss/postcss-plugin-boilerplate">this repo</a> to quickstart the project.</p>
<p>Every bit of logic will be placed in index.js. Nothing fancy: we get the options, if any, and parse the AST. Business as usual.</p>
<pre><code class="language-js">var postcss = require(&#x27;postcss&#x27;);

module.exports = postcss.plugin(&#x27;PLUGIN_NAME&#x27;, function (opts) {
  opts = opts || {};

  // Work with options here

  return function (root, result) {
    // Transform CSS AST here
  };
});
</code></pre>
<p>For our showcase, let’s do something simple. Say we have the following code:</p>
<pre><code class="language-css">.link {
  text-decoration: none;
  color: red;
  @disable #efefef;
}
</code></pre>
<p>We will transform the <code>@disable #efefef</code> to:</p>
<pre><code class="language-css">.link {
  text-decoration: none;
  color: red;
  &amp; .disabled {
    color: #efefef;
  }
}
</code></pre>
<p>Note that the <code>&amp;</code> syntax is the way CSSnext works with nesting. Read more about the specification <a href="http://tabatkins.github.io/specs/css-nesting/">here</a>.</p>
<p>Alright, let’s get our hands dirty. Follow each step <a href="http://astexplorer.net/#/gist/3d4a9a3b8857c3ec7aaa3c85ac35037a/0c780a1b9b6f6e83eff62a4b5ed7416c9e977992">here</a>. Developing on this platform is much easier, since you can check the AST at any time. Babel is also included, so we can modify our code a bit, like this:</p>
<pre><code class="language-js">import * as postcss from &#x27;postcss&#x27;;

export default postcss.plugin(&#x27;postcss-disable-annotation&#x27;, (options = {}) =&gt; {
  return (root) =&gt; {
    root.walkRules((rule) =&gt; {});
  };
});
</code></pre>
<p>In the above snippet, we can parse every single CSS rule. Throw in a console.log and see for yourself. For our example, we are only interested in declarations with an annotation, so let’s filter the rest out.</p>
<pre><code class="language-js">import * as postcss from &#x27;postcss&#x27;;

export default postcss.plugin(&#x27;postcss-disable-annotation&#x27;, (options = {}) =&gt; {
  return (root) =&gt; {
    root.walkRules((rule) =&gt; {
      const disable = rule.nodes.filter((x) =&gt; {
        return x.type === &#x27;atrule&#x27; &amp;&amp; x.name === &#x27;disable&#x27;;
      });

      if (disable.length === 1) {
        const color = disable[0].params;
        rule.removeChild(disable[0]);
      }
    });
  };
});
</code></pre>
<p>We&#x27;ve kept only the annotated declarations, the at-rules named &#x27;disable&#x27;. We expect to have only one in each selector. If that&#x27;s the case, we&#x27;re free to delete the <code>@disable #efefef</code> entry and start building the output.</p>
<pre><code class="language-js">var postcss = require(&#x27;postcss&#x27;);

module.exports = postcss.plugin(&#x27;postcss-disable-annotation&#x27;, function (opts) {
  opts = opts || {};

  return (root) =&gt; {
    root.walkRules((rule) =&gt; {
      const disable = rule.nodes.filter((x) =&gt; {
        return x.type === &#x27;atrule&#x27; &amp;&amp; x.name === &#x27;disable&#x27;;
      });

      if (disable.length === 1) {
        const color = disable[0].params;
        rule.removeChild(disable[0]);

        const new_rule = postcss.rule({
          selector: &#x27;&amp;.disabled&#x27;,
        });
        const decl = postcss.decl({
          prop: &#x27;color&#x27;,
          value: color,
        });
        new_rule.append(decl);
        rule.append(new_rule);
      }
    });
  };
});
</code></pre>
<p>We create a new rule, pick the selector&#x27;s name, and append the color declaration. To wrap things up, we place the new rule inside the rule where the annotated declaration used to be. After removing the ES6 features that require Babel, we&#x27;re free to paste the code into index.js. The output is correct and the crowd goes wild.</p>
<pre><code class="language-css">.link {
  text-decoration: none;
  color: red;
  &amp;.disabled {
    color: #efefef;
  }
}
</code></pre>
<p>Well, that&#x27;s it. The <a href="http://api.postcss.org">PostCSS API docs</a> are a must-read for any follow-up ideas.</p>
<p>PostCSS opens infinite possibilities. I strongly believe that any developer willing to invest some time in this ecosystem will be rewarded.</p>
<p>Might want to check out <a href="http://slides.com/dnlytras/posts#/">my presentation</a> too.</p>
<p>Cheers!</p>]]></content:encoded>
            <author>dnlytras@gmail.com (Dimitrios Lytras)</author>
        </item>
    </channel>
</rss>