Easier Discord Slash Command Setup in Node.js

Suppose you are trying to create a Discord bot and slash commands. You are using Node.js and TypeScript and you stumble upon the Discord.js library. I mean, how could you miss it? It shows up in search results before even Discord’s official documentation, so you’d be forgiven for thinking it’s official. And you might look at some of the code and examples and think, “Wow, this sure is… a lot of words. And there are for loops?” And maybe you’d think that this is just how Discord engineers want developers to do things. And it might make you think that this is really fucking complicated.

Surprise! Discord.js is not an official Discord library. And where slash commands are concerned, its example code makes the process significantly more complicated than it needs to be. It’s so complicated that I’d argue it’s doing a disservice to Discord and the talented Discord.js developers who maintain the package.

So without further ado, if you want to make a very simple Discord slash command, it’s as simple as this:

  1. Make the API call to create the slash command

In Discord.js, you’d use their SlashCommandBuilder class to configure it. You can do that and simply log the output as JSON with console.log(command.toJSON()). That gives you the payload for the API call you need.
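
For example, a builder for the command registered below might look like this (a sketch using the current Discord.js builder API):

  import { SlashCommandBuilder } from 'discord.js';

  const command = new SlashCommandBuilder()
    .setName('hello')
    .setDescription('Greet a person')
    .addStringOption((option) =>
      option
        .setName('name')
        .setDescription('The name of the person')
        .setRequired(true),
    );

  // Prints the JSON payload used in the curl call below.
  console.log(JSON.stringify(command.toJSON()));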

BOT_TOKEN='replace_me_with_bot_token'
CLIENT_ID='replace_me_with_client_id'
curl -X POST \
-H 'Content-Type: application/json' \
-H "Authorization: Bot $BOT_TOKEN" \
-d '{"name":"hello","description":"Greet a person","options":[{"name":"name","description":"The name of the person","type":3,"required":true}]}' \
"https://discord.com/api/v8/applications/$CLIENT_ID/commands"

That’s an example from https://docs.deno.com/deploy/tutorials/discord-slash/#step-2%3A-register-slash-command-with-discord-app

  2. Set up the handler for the slash command

The Discord.js documentation has a lot of code about dynamically loading slash commands and storing them in an array or a Collection so you can look them up by name. None of that is necessary at all. You’d do that if you have a lot of slash commands and they’re changing so frequently that you can’t be bothered to manually maintain a list of commands to support. Tutorial docs do not need that.

The simple version is just a switch statement, assuming you are using the Discord.js library with a client instance:

  discordClient.on(Events.InteractionCreate, async (interaction) => {
    if (!interaction.isChatInputCommand()) {
      logger.info('Command is not a chat input command');
      return;
    }

    switch (interaction.commandName) {
      case 'your-command-here': {
        const { options } = interaction;
        const code = options.getString('code');
        if (!code) {
          logger.info('No code provided');
          await interaction.reply({ content: 'No code provided', ephemeral: true });
          return;
        }

        // This is where you'd actually use the user's identity, e.g. to
        // verify their account against the provided code.
        const { id: discordUid, username: discordUsername } = interaction.user;
        await interaction.reply({ content: 'Successfully verified!', ephemeral: true });

        break;
      }
    }
  });

That’s it! Just like handling every webhook in existence! And if you want type safety or you want to handle it dynamically, there are easy ways to do that, too.
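
If you do want the dynamic lookup, one hedged sketch is a plain object mapping command names to handlers – no Collection or filesystem loading required (the names here are mine):

  import type { ChatInputCommandInteraction } from 'discord.js';

  type CommandHandler = (interaction: ChatInputCommandInteraction) => Promise<void>;

  // Add a command by adding a key; TypeScript checks every handler's signature.
  const handlers: Record<string, CommandHandler> = {
    hello: async (interaction) => {
      const name = interaction.options.getString('name', true);
      await interaction.reply({ content: `Hello, ${name}!`, ephemeral: true });
    },
  };

  // Then, inside the InteractionCreate listener:
  // await handlers[interaction.commandName]?.(interaction);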

I don’t like speaking ill of open source projects. Discord.js is a powerful library and I appreciate the care that went into it. I know that the docs I’m complaining about come from someone trying to be helpful and offering solutions that they think will help. But the complexity of the docs turned me off to the whole process and ultimately cast both the library and Discord itself in a bad light until I realized the complexity was coming from implementation details.

React Server Actions - Versioning, Filenames, and Other Considerations

I’ve been building a new thing called Ampwall since the beginning of 2023 and it’s going quite well. I announced it via Twitter post on September 29 (same day as Woe’s fifth album release!) and the response was intense. I’m doing it full-time and I am optimistic.

Ampwall is built using Next.js. Specifically, I’m using Next.js 13’s divisive App Router. I like it quite a bit. I believe in server rendering: I think we ship too much JavaScript to clients and make our lives as engineers harder when we insist on writing APIs for things that should just use server templates. But I also love React and TypeScript, so I don’t want to give up on those if I don’t have to. Next.js 13’s embrace of React Server Components and Server Actions ticks all the boxes: server rendering with TypeScript and React! Beautiful.

Vercel announced Server Actions were stable last week as part of their Next.js 14 release event. This has spawned a lot of conversation, critique, and drama, most of which I find rather dull or knee-jerky or immature. But one topic piqued my interest: versioning. How do you version server actions? What do we need to consider when deploying new versions?

The problem

Version skew is well defined and well understood by software engineers. It can be a challenging problem for products with long-running client sessions. One benefit of the explicit API client-server relationship is the explicit definition and publishing of public API interfaces. Experienced engineers intuitively understand this. It’s typical for changes to be flagged during code review: “This will break old clients”, “We should mark this argument optional and watch logs until 99% of users are on the new version”, or “We should put this endpoint in a v2 namespace”.

Server Actions essentially create RPC endpoints in Next.js servers. This is magical and it works wonderfully. When you define a function with the magic 'use server' directive or put it in a 'use server' file, it will be executed by the server.

'use server';

// This can be called from the client but it will execute on the server
export async function foo() {
  return 'bar';
}

As I understand it, this works by creating an ID representing this function and outputting code that, from a client, makes a POST and references the ID as the value of a header called Next-Action.
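
Concretely, the generated client code does something like this (a hedged sketch – the route, action ID, and body format are illustrative placeholders, not Next.js internals):

  // What the client bundle roughly does when you call foo() from a component.
  await fetch('/current-route', {
    method: 'POST',
    headers: { 'Next-Action': 'a1b2c3d4' }, // the ID generated for foo
    body: '[]', // the serialized argument list
  });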

So then: how do we version Server Actions? More urgently, if I deploy five versions of my app across five days and a client fails to reload their window past day 1, what will happen when they interact with foo()?

What’s in a name?

The answer to all of this comes back to how the Next-Action ID is generated. From what I can tell, the ID comes from this function in the Next.js source. It creates a SHA1 hash from a combination of the file name and the function name. This matches what I’ve seen: a function called foo in a file called bar will always have the same ID, regardless of its implementation. So where versioning is concerned – if we want to introduce a breaking change to a Server Action, maybe adding a required argument to unlock some new behavior without breaking folks who haven’t reloaded yet – we can create fooV2. We could do something like this:

'use server';

export async function foo(optionalArg?: string) {
  if (optionalArg) {
    // do the new thing
  } else {
    // do the old thing
  }
}

export async function fooV2(requiredArg: string) {
  return foo(requiredArg);
}

But this is a mighty footgun: renaming a function changes your server’s public interface. “Well, duh, of course, just like renaming an API endpoint is a breaking change.” Yes, but React Server Actions are a new paradigm with a fuzzy line deliberately drawn between client and server, invisible to an engineer working in a Client Component and easily confused with a plain ole backend async function if you’re working on the server.

React Server Actions bring us one simple refactor away from introducing version skew in a way that might be extremely surprising. Reorganize your files? Fix a typo in a function? Rename something to be more explicit or fit its purpose better? Breaking API changes.

Don’t give me bad news

Ok, so you fixed a typo in a function while doing something else. It’s so trivial that nobody thought about it during code review; you obviously misspelled “cart” as “crat” and that needed to be fixed. Version skew has been introduced, but at least you’ll know from logs, right?

Nope. When a client POSTs to a Server Action function that does not exist, the server swallows it silently and returns 200. You will not know something is wrong unless the client that called it is looking for a response and fires a loggable error.
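
If you want some signal in the meantime, one option is a thin client-side wrapper (a sketch, assuming the call resolves with no value when the server drops the action):

  // Wrap action calls so a silently-dropped action at least gets logged.
  async function callAction<T>(name: string, action: () => Promise<T>): Promise<T> {
    const result = await action();
    if (result === undefined) {
      // Could be version skew: the server no longer knows this action's ID.
      console.error(`Server Action ${name} returned nothing`);
    }
    return result;
  }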

New day, new best practices

What do we do with this knowledge? I’m doubling down on some approaches and considering others.

First, the things I’ve been doing since day 1: no anonymous functions with 'use server', and no Server Actions defined in files that aren’t explicitly dedicated to them. I put all of my Server Actions in a folder called controllers because I treat them like the C in MVC. This limits the likelihood of having to move a Server Action to another file and changing its hash.

Next, I’m considering something else: making the Server Action itself just a one-line wrapper around an implementation.

'use server';

// fooImpl lives in a plain (non-'use server') module; the path is illustrative.
import { fooImpl } from './foo-impl';

export async function fooV1(input1: string, input2: number) {
  return fooImpl(input1, input2);
}

This offers a few benefits: it makes Server Actions look weird in a way that will hopefully stand out to someone reviewing, it decreases the likelihood that some well-intentioned engineer will be mucking about in a file where they can accidentally cause problems, and it limits the responsibility of the Server Action to the routing layer of the server.

How could this be improved?

It seems clear to me that we need a way to explicitly set or seed a Server Action’s ID.

'use server';

// Great opportunity for a decorator
@actionKey('foo')
export async function foo() {
  // impl
}

// Or just add it as metadata here?
foo.actionKey = 'foo';

// Maybe a tagged template literal?

export async function foo() {
  actionKey`foo`;
}

This breaks the dependency between the declaration and the public interface, or at least gives us a way to control the output.

Is this really worth it?

When I posted about this on Reddit, my recommendation that we need a way to control this was met with a great quip: “Like, say, a URL?” That’s a good point. If we’re explicitly keying our endpoints, we’re sort of just creating an alternate path to a public API, aren’t we?

I’d say we still benefit from avoiding the ritual boilerplate of API declarations. Server Actions, for me, have been part of a mighty improvement to my Developer Experience, and I don’t think that explicitly keying them would negate their benefit. If anything, explicit keys would help engineers understand that their Server Actions are already API endpoints. It would make them more predictable, less magical, and help folks doing code reviews or tracking regressions. I think we’d be able to find a way to add an eslint rule to require explicit keys.

I’m going to keep using Server Actions but I’m watching this closely. They’re still young and I’m optimistic that the experience of using them and managing projects will only improve over time.

Next.js App Router Client Cache Busting

UPDATE: 24 hours after posting this, Vercel responded to the outstanding issue. A few days later, they started a discussion about it. They’ve committed to improving the caching experience.

ORIGINAL POST

Next.js 13’s App Router has caused no shortage of controversy. Lots of people are upset about Server Components. Some blogs seem to think the sky is falling. Me? I love it. Love. Big heart eyes, harps strumming, floating to a cloud.

As much as I love React, as confident as I feel with it and TypeScript, I think that the benefits of the SPA are lost on most products. I think that the massive amount of code we push to clients is embarrassing. The rituals of API requests, the complexities of client state management… I’m over it. At the risk of sounding like an old man: I miss Rails. I miss having a server that can talk to my database and spit out HTML that loads quickly on clients. One deployment, one environment, way less boilerplate. Obviously, as a professional React developer, I know there are tons of amazing things that simply cannot be done with server-rendered pages; like I said, I love React. But I think we’ve reached peak SPA.

React Server Components give me what I want. The server talks to my database, talks to APIs, prepares data, and renders as much HTML as possible. Then it lets me elegantly drop into the client as needed. React Server Actions give me simple RPC calls. The move to RSC is accelerating modern approaches to CSS-in-JS (I’m currently working on a personal project using Panda and it is lovely!) and even though things are changing fast, it’s giving me what I’m looking for.

But the App Router has its share of problems. Among them are its highly opinionated caching rules, particularly its client-side caching rules. There is an ongoing discussion about this in their GitHub issues. It is the single most commented open issue in the project right now, with 286 comments at the time of my writing. You can find it at https://github.com/vercel/next.js/issues/42991. Vercel has so far not responded to it.

I slapped together a quick workaround for the problem. It relies on revalidatePath, a function that Vercel limits usage of in its free tier, so it will not be for everyone. But if you’re hosting somewhere without such a restriction – maybe you’re hosting on a Node.js server so you’re not as concerned about counting function invocations – here’s my approach.
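
Roughly, it takes this shape (a sketch with names of my own invention; the real code wires this into navigation):

  // app/actions.ts
  'use server';

  import { revalidatePath } from 'next/cache';

  // Calling this Server Action from the client invalidates the cached entry
  // for `path`, so the next navigation refetches fresh data instead of
  // serving the stale client-side Router Cache.
  export async function bustCache(path: string) {
    revalidatePath(path);
  }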

High Speed FTDI + Android Comms (OR) Why Am I Always Reading 0 Bytes?

I’ve been working on a project with some very unfamiliar tech. The project involves communication between a new Android app (Kotlin) and an FTDI 232R connected to an Arduino. I encountered a problem that baffled everyone on the team for weeks and was about to be labeled a “general incompatibility between FTDI devices and Android apps” until I stumbled upon the solution. Before I describe the solution, I’m going to document some basic details. While working on this, I was unable to find any examples of people having this problem, so there was an intense feeling of isolation as I struggled on and off for weeks to resolve it. My hope is that this might help someone identify the same problem in their system.

tl;dr

If you’re experiencing mystery “0 bytes available” errors, you might need to change your latency timer setting. The problem is described here. I also strongly recommend you read the longer document from which that excerpt is drawn, AN23B-04 Data Throughput, Latency and Handshaking. We immediately resolved our issue with a call to setLatencyTimer((byte) 1); and very small reads (64 bytes at a time, no more) but ultimately settled on an event character and larger reads. Full details below.

Detailed Notes

Our Arduino’s firmware is capable of sending a few different messages across the wire. Each message is small, anywhere from 16 bytes up to around 256. Most of these are on-demand: the app sends a command, the Arduino decodes it, then it sends one message in response that is either an ACK or the data you requested. There is one exception: one particular message from the app triggers the start of an infinite stream of 44-byte messages at a rate specified in the request. In this case, the Arduino is reading sensors, performing some basic analysis, and spitting the results out across the wire for the app to do with as it pleases. The app reads this constant stream of bytes, does its own analysis, puts it on the screen, etc.

Our minimum acceptable streaming rate is 300 Hz, but we hope for closer to 500 Hz or greater, so our baud rate is currently 460800. (At roughly 10 bits per byte on the wire, that’s about 46,000 bytes/s of capacity, comfortably above the ~22,000 bytes/s that 500 Hz × 44-byte messages require.)

We encountered an issue whereby the app was constantly being told 0 bytes were available for read. The problem was extremely inconsistent and weird. The following were true:

  • We could ALWAYS open the port from our app.
  • We could ALWAYS transmit successfully from the app to the Arduino. We knew this because logs on the firmware indicated that the right bytes arrived in the right order.
  • We would SOMETIMES receive the correct response. It was all or nothing: sometimes we would query for available bytes and be told 0, other times we would see the expected number.
  • We could RARELY start the data stream. Once we sent the message to start streaming, the app would always believe there were 0 bytes available for read. Once that state was encountered, no other messages would be sent across the wire until we rebooted the firmware. It seemed more likely to fail as our streaming rate exceeded 100 Hz. Our target was 300 Hz or greater, so this was a serious problem!

Adding to the mystery, this seemed specific to the FTDI chip. Our first draft of this used the Arduino’s programming port for serial data transfer at 115200 baud. We were losing a lot of packets from the lack of flow control but it never failed to respond to messages.

More troubling was the fact that a C++ test application seemed to communicate correctly. This pointed towards a code problem with the Android app.

We tried three different libraries in attempts to resolve this. Those were:

  • usb-serial-for-android - An open-source library that is pretty well maintained and offers a lot of features. Unfortunately, it doesn’t support automatic flow control, so we worried we wouldn’t be able to use it long term.
  • UsbSerial - Another open-source library. This one is not nearly as well maintained and it has quite a few open issues that describe some pretty heinous bugs. I opened an issue after I found that calling the wrong method during initialization would result in all your sent messages being replaced by two 0 bytes for every one byte in your message! Brutal. It supports flow control but it has so many problems that I unfortunately couldn’t recommend it, even if it supported what we needed.
  • FTDI’s official d2xx - The official closed-source library for FTDI devices. It hasn’t been updated in two years but by virtue of being official, we expected it to be more reliable or at least more full-featured. The closed-source part is a bummer and I think it would be a much better library if not for that, but that’s another story. This was the library we wound up using and we will continue to do so.

All three of these libraries exhibited the same behavior! This started looking like a major issue with FTDI devices. I ordered a few Prolific PL2303-based serial cables to test as an alternative but kept researching in the meantime.

I began looking at FTDI’s official test apps and their example Android app code. The example code is… not… great… but in taking notes, I came across a mysterious call to setLatencyTimer(). This led me to this, which appeared to describe our problem exactly. It specifically remarks, “While the host controller is waiting for one of the above conditions to occur, NO data is received by our driver and hence the user’s application. The data, if there is any, is only finally transferred after one of the above conditions has occurred.” I did some more reading and found the longer AN23B-04 Data Throughput, Latency and Handshaking which explained this and many other concepts. This document was particularly enlightening. I feel like the embedded software development world is full of extremely dense, unapproachable technical specs that assume a ton of highly specific knowledge; by comparison, this document was a breath of fresh air and explained things from a high enough level that I came away feeling more capable of anticipating behavior as I continued troubleshooting.

It appeared that we were never hitting any of the three rules fast enough to trigger a read. It still doesn’t totally make sense to me; I feel like we should have eventually hit 4KB to trigger the send, but maybe I never let it sit long enough to get there? Or maybe there was another timeout value that was clearing the buffer before then. What I do know is that if I set the latency timer down to 1 ms and ensured we never requested more than 64 bytes at a time, our data read problems went away. We could stream at 500 Hz and messages would usually start showing up as soon as we hit the button. This change was as simple as setLatencyTimer((byte) 1); and making sure that we never requested more than 64 bytes during a call to read. The immediate problem was solved and it was clear that we did not have some incompatibility between FTDI and our Android app.
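
In Kotlin, the round-1 fix amounted to something like this (a sketch: these are the d2xx FT_Device calls as I used them, but the loop and names are simplified from our real code):

  import com.ftdi.j2xx.FT_Device

  // Round 1 (sketch): 1 ms latency timer plus small reads.
  fun streamSmallReads(ftDevice: FT_Device, decode: (ByteArray, Int) -> Unit) {
      ftDevice.setLatencyTimer(1.toByte())
      val buffer = ByteArray(64)
      while (true) {
          val available = ftDevice.queueStatus // bytes waiting in the driver
          if (available > 0) {
              // Never ask for more than 64 bytes in a single read.
              val read = ftDevice.read(buffer, minOf(available, buffer.size))
              decode(buffer, read)
          }
      }
  }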

I say that it would “usually” start showing up because it still exhibited strange behavior. Very often, I would start the stream through the app’s interface and nothing would happen. Then I’d send another message (“get hardware version”) and not only would it get my hardware version, it would also recognize that data was streaming in. Other times, I would request our largest payload, a system configuration, and it would return 31 bytes of the 200+ we expect. Just like with the stream, I’d send any other message (“get firmware version”) and the remaining 200+ bytes would show up.

I wound up making a few other changes to resolve this problem and improve the behavior overall.

First, using more information gleaned from the Data Throughput, Latency and Handshaking document, I decided that we would be better off using the FTDI’s support for event characters than the latency timer. Our encoding rules use a 0 byte as a delimiter, so it was an obvious choice. This allowed me to increase our maximum read size up to 256 bytes, which helped in the event that our read loop was delayed and we had to quickly get through a backlog of data. (I could probably go higher but I’m being pretty careful right now; I want to keep things moving.) Finally, I modified the read loop to also be responsible for writes, added a FIFO queue for outgoing messages, and (crucially) a 50 ms timeout of the loop after every single message sent. The 50 ms timeout was the most significant piece – it was the final change that ensured we stopped seeing partial messages or messages that only arrived after a subsequent send. I don’t have a good answer for why it was necessary, but given the complexities of the d2xx library, reading from USB in general, the FTDI and its buffers, and the Arduino, it’s not too surprising that things can get out of sync if you’re moving fast. A sketch of the final loop follows.
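
Here is the shape of it (a sketch; I believe setChars mirrors d2xx’s FT_SetChars, and the rest is simplified from our real code):

  import com.ftdi.j2xx.FT_Device
  import java.util.concurrent.ConcurrentLinkedQueue

  // Round 2 (sketch): event character on our 0-byte delimiter, plus one loop
  // that owns both reads and writes.
  fun runSerialLoop(
      ftDevice: FT_Device,
      outgoing: ConcurrentLinkedQueue<ByteArray>,
      decode: (ByteArray, Int) -> Unit,
  ) {
      // Event char 0x00 enabled, error char disabled (per FT_SetChars).
      ftDevice.setChars(0x00.toByte(), 1.toByte(), 0x00.toByte(), 0.toByte())

      val buffer = ByteArray(256)
      while (true) {
          // Drain one queued outgoing message, then give the link time to settle.
          outgoing.poll()?.let { message ->
              ftDevice.write(message, message.size)
              Thread.sleep(50) // without this pause we saw partial or stalled reads
          }

          val available = ftDevice.queueStatus
          if (available > 0) {
              val read = ftDevice.read(buffer, minOf(available, buffer.size))
              decode(buffer, read)
          }
      }
  }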

With the implementation of the event character, the buffered writes added to the loop, and the timeout after writing, we appear to be running smoothly. So smoothly, in fact, that I was able to remove the setLatencyTimer call entirely and just leave it at its default. As configured, data is sent as soon as a 0 is hit or 256 bytes are available, whichever comes first. (Typing this out, I realize that I should probably just set it to the exact size of our largest message – it could never be smaller, and an incomplete message does us no good!)

To summarize, we went through two rounds of improvements that changed our situation from bleak to beautiful.

Round 1:

  • Set the FTDI’s latency timer to 1ms
  • Limit our max read size to something small to prevent a “jerky” feel

Round 2:

  • Revert latency timer to default
  • Enable an event character keyed to our delimiter, a 0 byte – this is the key
  • Set a max read that’s a bit bigger than our typical messages to help us catch up if we ever have a huge backlog and want to work the queue down (again, I’m not sure when that situation would actually arise)

As it happens, the d2xx library is the only one of the three that supports configuring the latency timer, the event character, and flow control. Of the two open-source libraries, one supports the latency timer, the other claims to support flow control, and neither supports the event character. So we’ll be sticking with the closed-source official d2xx.

It appears that our use case – an extremely high streaming rate with tiny messages at a very high baud – is somewhat unique. If we had been sending larger messages at a slower rate, I don’t think we would have encountered this. Our 44-byte messages at 300 Hz were the problem.

I spent many lonely weeks fighting with this. Failure to resolve it would have been a major problem for the project. In the end, the solutions I found were new to the whole team, which included many people with much more FTDI experience than me – which goes to show how esoteric some of these configuration parameters can be. This is my first project writing Kotlin, working on Android, or using FTDI devices at all, so while I’m disappointed that it was such an unpleasant struggle, I am pleased to have it behind me. I sincerely hope this helps somebody avoid going through the same experience.

A React Dev's Preliminary Thoughts on Jetpack Compose

I recently, unexpectedly, found myself learning the basics of Android development so I could manage a large project. When it took longer than expected to find the right people to do the development, I wound up also applying some of this knowledge by porting a very large amount of code from Python and TypeScript to a prototype app. The goal was to get a head start on implementing business logic. Then it grew into business logic plus communication with hardware so we could verify some external systems. Finally, it turned into all of that plus some toy interfaces so we could easily demonstrate to incoming team members how the business logic could be applied to the view.

This was and still often is a painful experience, as working in an unfamiliar language, framework, and platform so often is. Luckily, while the framework (Android) and the platform (also Android) are often very foreign, Kotlin itself has been a dream. In many ways, it feels like the language I’ve always wanted: the friendliness of Ruby with a powerful type system that has Java’s specificity but feels closer to TypeScript in its expressiveness, with the performance of the JVM. Coroutines are amazing. Now that SharedFlow has been released, it also provides concurrency and message passing that’s as easy as Go’s goroutines and channels. Wonderful!

My classes and their tests moved from Python and TypeScript to Kotlin with absolutely no difficulties. There’s nothing like porting over thousands of lines of code and unit tests to make you feel more comfortable with a language. I did my best to modify it to look more like idiomatic Kotlin but I’m looking forward to having someone review it and taking me to school.

But this isn’t about Kotlin, it’s about Jetpack Compose. I first read about it some months ago and noticed then that… wow, it sure sounds like React for Android. Plenty of comments online said the same thing, or said that it was just Flutter for Android, or SwiftUI for Android, but plenty of people said those were just React for X. As a React developer, this made me happy, since I like the idea of working in other languages and platforms. At the time, I didn’t know if or when I’d apply it. And then…

…I wound up here. It became clear that the best way to prove that my business logic really worked, test our new Arduino-based robot control system, and start testing our new mechanical components would be to provide a simple interface that replicated the states of our legacy product. While I felt very comfortable with Kotlin and reasonably comfortable with Android navigation, fragments, and activities, I did not feel at all comfortable with the classic UI framework. It felt very jQuery to me, like going back in time to the bad old days of web development. Jetpack Compose, on the other hand, had just reached alpha (alpha 7 by the time I got to it) and reviews were positive. Since this was just a prototype interface, it seemed like the best option for slapping something together. The person or people who inherit the app could then decide whether to continue betting on Jetpack Compose or fall back to the battle tested, if cumbersome, classic.

I’ve spent a few hours kicking the tires of Jetpack Compose. Since I only get to be a total amateur for a little while, I thought it would be good to jot down some of my preliminary thoughts while they’re still fresh.

Tl;dr: I like it!

To get it right out of the way, I really like Jetpack Compose! I’d happily continue building with it. I’ve seen comments on Android forums from experienced developers who say that it is the future of Android development, and I hope that is true. It feels just like React in so many ways. Almost all of my experience transferred immediately: useState becomes remember { mutableStateOf(...) }, props are still props, and it has its own version of useEffect. The way you organize components (composables) is the same, so you can think about the view and its state almost exactly the way you would in React! This is awesome.
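
The mapping is nearly one-to-one. A tiny sketch (standard Compose APIs; the composable itself is my own toy example):

  import androidx.compose.foundation.layout.Column
  import androidx.compose.material.Button
  import androidx.compose.material.Text
  import androidx.compose.runtime.*

  // `remember { mutableStateOf(...) }` plays the role of useState, and
  // recomposition plays the role of re-rendering.
  @Composable
  fun Counter() {
      var count by remember { mutableStateOf(0) }
      Column {
          Text("Count: $count")
          Button(onClick = { count++ }) {
              Text("Increment")
          }
      }
  }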

I was able to bang out my first prototype interface, a crude recreation of one of our most complex, dynamic interfaces, in a couple of hours. Today, I very quickly prototyped another interface – a simple control panel that spits out some dynamic values coming from sensors and provides controls to load additional information or adjust the rate of the streaming data – in about 45 minutes. All of the methods that interact with the business logic plug right in. I’m not even using their effects yet, just passing props down, and I feel productive.

Some stray thoughts on the positives:

  • The fact that it’s Kotlin all the way down is extremely refreshing. Working in React forces one to context switch constantly: first it’s TypeScript, then it’s TSX, then it’s CSS, oh well here it’s a Styled Component so it’s CSS-in-TS. Being able to just live and breathe one language is wonderful. I’ve read about plenty of other languages or frameworks that allow this – Elm comes to mind – and my thought had always been, “But I’m good at all these things, I’m productive, it’s what everyone does. Why change it?” Avoiding context switching is real!

  • I think I like the fact that the built-in layout composables are so rigidly defined. That is to say, I appreciate that instead of a div with display: flex; and flex-direction: column; it’s just Column. It’s almost like this is what happens when you design a layout system in 2020 instead of gradually bolting on behavior over many decades…

  • I haven’t used it yet but the fact that it has an official Animation library is exciting. This tutorial paints a great picture.

  • I really really really like being able to create MutableState inside of a ViewModel. Hell, I really like ViewModel in general. One of my biggest gripes about React right now is the way it can hide complexity in hooks, letting some component deep down in the depths of a view subscribe to some external event and hook into all kinds of crazy side-effects. Hooks also force you to think exclusively in terms of side-effects, like you’re building a huge Rube Goldberg machine. By comparison, MutableState in a ViewModel feels like the best of both worlds. Data flow is still unidirectional – I’m still feeding props down through composables – but the option of having some of this logic originate in an isolated world dedicated to business logic makes things a lot more straightforward. (There’s a sketch of this after the list.)

  • It’s nice that the body of a composable is just Kotlin. Not some Kotlin-markup hybrid where you can use expressions but not statements à la TSX/JSX – full-on Kotlin. Even though I don’t feel limited by TSX these days, I think this will cut the learning curve for new devs significantly compared to what one finds with React. Being able to use a when expression or assign variables right in the body of the code is freeing. It’s also a double-edged sword that I’ll comment on later.
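
To make the ViewModel point concrete, a minimal sketch (standard androidx APIs; the class and property names are mine):

  import androidx.compose.runtime.getValue
  import androidx.compose.runtime.mutableStateOf
  import androidx.compose.runtime.setValue
  import androidx.lifecycle.ViewModel

  // Business logic lives here; composables just read the state and call methods.
  class SensorViewModel : ViewModel() {
      var latestReading by mutableStateOf(0.0)
          private set

      fun onNewReading(value: Double) {
          // ...analysis, validation, etc....
          latestReading = value
      }
  }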

It’s not all roses, though. Some of my immediate concerns:

  • The in-IDE previews are slooooow. I find that I don’t preview things as much as I’d like to because every save causes a rebuild that takes forever. I’m sure this must be part of its alpha-ness, but it really stands out in contrast with how well the rest of it seems to work.

  • As much as I appreciate the explicit, rigidly defined composables (Column, Row, etc.) and their unique styling parameters and modifiers, I worry that it’s going to take forever to really get fluent in making things look good. Say what you will about CSS, but at least you can be reasonably sure that most of the properties you know for one element will work on another. Maybe that’s too generous, but at least you’ll know where to start. This isn’t as true with Compose. While I picked up positioning of composables and their contents pretty quickly, making things look the way they do in the wireframes is still a challenge because I’m constantly digging into them to see what they can do.

  • The aforementioned issue wouldn’t be as much of a problem if not for the lack of best practices and examples out there. Almost worse, a lot of what you can find right now refers to earlier versions and removed APIs. Luckily, I’m not trying to build something for production and the rest of it is familiar enough that I’m confident I can invest some time in just about every issue and still be ahead of schedule. But it’s worth considering if you’re not confident with the mental model required to build this kind of app.

  • It’s going to need to improve its application state management story. While Context works fine for most of us these days and Compose has the corresponding Ambient, there are still cases where the web needs Redux and it seems likely that the same will be true for Android. There is a ReduxKotlin project in the works, so maybe it will take off. Or maybe there’s a better option, something more appropriate for this ecosystem.

  • It’s also going to need a better solution for forms. My kingdom for React Hook Form for Jetpack Compose!

A last one that I want to expand upon beyond a quick bullet point:

I worry that the ability to mix composable function calls and Kotlin code could lead to readability and predictability challenges.

The TSX syntax places limitations on what I can do once I start describing the view. Flow control is limited, side-effects are verboten. It’s predictable, it’s clear. It can be noisy if someone abuses ternaries or boolean expressions for flow control, but we can train ourselves to expect it.

Jetpack Compose enforces no such order. I can do anything, anywhere, at any time. This can be nice if used sparingly but it doesn’t take much to imagine it going off the rails. Picture a wall of view code where, for some reason, a variable is reassigned 200 lines deep. Suddenly, the predictable view is a little less predictable. Refactoring is just a little trickier. Composables become just a bit more complex and harder to predict. It will be important for the community to establish best practices around this!

So that’s where I stand right now. It’ll be interesting to review this in a few months and see how much of this was accurate. In the meantime, I am optimistic.
