yaymukund’s weblog

Headless NixOS + Raspberry Pi + nixbuild from OSX!

I had to install NixOS on my Raspberry Pi 4 Model B recently. I didn’t have an HDMI → micro-HDMI cable, so I decided to install it headlessly. This is a fairly intricate setup because I wanted to:

  1. Cross-compile from aarch64-darwin → aarch64-linux.
  2. Remote build using nixbuild.net to speed up build times.
  3. Build from my memory-constrained 1GB Raspberry Pi.*

It involved a few gotchas which I want to document here.

* Note: Although it is possible to offload the compilation to nixbuild, you still need memory on the Pi to evaluate the nix code. There is an open issue for eval memory usage which may alleviate this.

Making a NixOS SD Image

First, make a flake.nix that produces the SD image:

{
  inputs = {
    nixos-generators.url = "github:nix-community/nixos-generators";
    nixos-hardware.url = "github:NixOS/nixos-hardware/master";
    nixpkgs.url = "nixpkgs/nixos-unstable";
  };

  outputs =
    { self
    , nixos-generators
    , nixos-hardware
    , nixpkgs
    , ...
    }: {
      # This produces the install ISO.
      packages.aarch64-linux.installer-sd-image =
        nixos-generators.nixosGenerate {
          # specialArgs = { inherit dotfiles-private; }; # my private input, omitted from this example
          system = "aarch64-linux";
          format = "sd-aarch64-installer";
          modules = [
            ./modules/hardware-configuration.nix
            nixos-hardware.nixosModules.raspberry-pi-4
            ./modules/base.nix
            ./modules/builder.nix
            ./modules/networking.nix
            ./modules/users.nix

            # Anything else you like...
          ];
        };
    };
}

Onto the modules…

modules/base.nix

{ pkgs, ... }: {
  programs.ssh.extraConfig = ''
    Host nixbuild
        HostName eu.nixbuild.net
        User root
        PubkeyAcceptedKeyTypes ssh-ed25519
        ServerAliveInterval 60
        IPQoS throughput
        IdentitiesOnly yes
        IdentityFile ~/.ssh/nixbuild

    # SSH config for your favorite code forge, needed so you can clone your
    # repository containing flake.nix for rebuilds.
  '';

  # Not strictly necessary, but nice to have.
  boot.tmp.useTmpfs = true;
  boot.tmp.tmpfsSize = "50%"; # Depends on the size of your storage.

  # Swapping to compressed RAM rather than to disk reduces writes to
  # the SD card, which extends its lifespan.
  zramSwap.enable = true;
  zramSwap.memoryPercent = 150;

  # Needed for rebuilding on the Pi. You might not need this with more
  # memory, but my Pi only has 1GB.
  swapDevices = [{
    device = "/swapfile";
    size = 2048;
  }];
}

modules/builder.nix

The remote builder lets us do two things:

  1. Cross-compile the SD image from a different architecture (aarch64-darwin in my case).
  2. Remote-build from the Raspberry Pi 4B. Compiling things locally on a Pi takes longer.

I (happily) use nixbuild.net, but you don’t have to. Any builder will do, as long as it can build aarch64-linux.

{
  nix.settings = {
    trusted-users = [ "my_username" ];
    builders-use-substitutes = true;
  };
  nix.distributedBuilds = true;
  nix.buildMachines = [{
    hostName = "eu.nixbuild.net";
    sshUser = "root";
    sshKey = "/home/my_username/.ssh/nixbuild";
    systems = [ "aarch64-linux" ];
    maxJobs = 100;
    speedFactor = 2;
    supportedFeatures = [
      "benchmark"
      "big-parallel"
    ];
  }];
}

modules/networking.nix

It’s important to get this right with a headless setup, or else you won’t be able to SSH in to diagnose any other issues. You probably want to use a secrets management system to configure the WiFi passkey; see the sketch after the module below.

{ ... }: {
  # Setup wifi
  networking = {
    hostName = "my_hostname";
    wireless.enable = true;
    useDHCP = false;
    interfaces.wlan0.useDHCP = true;
    wireless.networks = {
      my_ssid.pskRaw = "...";
    };
  };

  # And expose via SSH
  programs.ssh.startAgent = true;
  services.openssh = {
    enable = true;
    settings = {
      PasswordAuthentication = false;
      KbdInteractiveAuthentication = false;
    };
  };

  users.users."my_username".openssh.authorizedKeys.keys = [
    "ssd-ed25519 ..." # public key
  ];
}
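
As mentioned above, the pskRaw value is a placeholder. Here is a minimal sketch of keeping the actual passkey out of the config using networking.wireless.environmentFile; the file path and variable name are hypothetical, and any secrets management tool that can place that file on the machine will do:

{ ... }: {
  # /run/secrets/wireless.env is expected to contain a line like:
  #   PSK_MY_SSID=the-actual-passphrase
  networking.wireless.environmentFile = "/run/secrets/wireless.env";
  networking.wireless.networks = {
    my_ssid.psk = "@PSK_MY_SSID@";
  };
}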

modules/users.nix

{
  users.users.my_username = {
    isNormalUser = true;
    home = "/home/my_username";
    extraGroups = [
      "wheel"
      "networkmanager"
      "audio"
      "video"
    ];
  };

  security.sudo.execWheelOnly = true;

  # don't require password for sudo
  security.sudo.extraRules = [{
    users = [ "my_username" ];
    commands = [{
      command = "ALL";
      options = [ "NOPASSWD" ];
    }];
  }];
}

modules/hardware-configuration.nix

I don’t think there’s a good way to generate this before installing. Luckily, lots of people with Raspberry Pi 4Bs have put their hardware-configuration.nix online. Any of them should work. Here’s mine:

# Do not modify this file!  It was generated by ‘nixos-generate-config’
# and may be overwritten by future invocations.  Please make changes
# to /etc/nixos/configuration.nix instead.
{ config, lib, pkgs, modulesPath, ... }:

{
  imports =
    [ (modulesPath + "/installer/scan/not-detected.nix")
    ];

  boot.initrd.availableKernelModules = [ "xhci_pci" ];
  boot.initrd.kernelModules = [ ];
  boot.kernelModules = [ ];
  boot.extraModulePackages = [ ];

  fileSystems."/" =
    { device = "/dev/disk/by-uuid/44444444-4444-4444-8888-888888888888";
      fsType = "ext4";
    };

  swapDevices = [ ];

  # Enables DHCP on each ethernet and wireless interface. In case of scripted networking
  # (the default) this is the recommended approach. When using systemd-networkd it's
  # still possible to use this option, but it's recommended to use it in conjunction
  # with explicit per-interface declarations with `networking.interfaces.<interface>.useDHCP`.
  networking.useDHCP = lib.mkDefault true;
  # networking.interfaces.end0.useDHCP = lib.mkDefault true;
  # networking.interfaces.wlan0.useDHCP = lib.mkDefault true;

  nixpkgs.hostPlatform = lib.mkDefault "aarch64-linux";
  powerManagement.cpuFreqGovernor = lib.mkDefault "ondemand";
}

Once you have SSH access, you can run nixos-generate-config on the Pi to verify that it matches.

Putting it all together

  1. (From aarch64-darwin) Build the SD image.
    # --max-jobs 0: needed to force remote building for cross-compilation.
    # --system aarch64-linux: we need to override this because we're on darwin.
    nix build \
        --max-jobs 0 \
        --system aarch64-linux \
        .#installer-sd-image
    
    zstd \
        -d result/sd-image/*.img.zst \
        -o installer-sd-image.img
    
  2. (From aarch64-darwin) Write it to your SD card.
    diskutil unmountDisk /dev/diskN
    sudo dd \
        if=path/to/installer-sd-image.img \
        of=/dev/diskN \
        status=progress bs=1M
    diskutil eject /dev/diskN
    
  3. Put the SD card in your Raspberry Pi and start it up. It should appear on your local network.
  4. (From aarch64-darwin) ssh my_hostname and you should see it.

Rebuilding locally on the Pi

To rebuild on the Pi, there are a few more steps.

First, you’ll need to add the non-SD build target to your flake.nix:

nixosConfigurations.my_hostname = nixpkgs.lib.nixosSystem {
  # specialArgs = { inherit dotfiles-private; }; # my private input, omitted from this example
  system = "aarch64-linux";
  modules = [
    ./modules/hardware-configuration.nix
    nixos-hardware.nixosModules.raspberry-pi-4
    ./modules/base.nix
    ./modules/builder.nix
    ./modules/networking.nix
    ./modules/users.nix

    # Anything else you like...
  ];
};

Then, a few manual steps:

  1. ssh into your Pi and run ssh-keygen -t ed25519 -f ~/.ssh/nixbuild
  2. ssh into your Pi and run ssh-keygen -t ed25519 -f /root/.ssh/nixbuild (as root)
  3. Add the public key to your nixbuild.net account.
  4. git clone your config on the Pi.

(I’m not sure why both root and non-root keys are needed for nixos-rebuild to do its thing here. If you know, please tell me.)

Then you should be able to run:

nixos-rebuild switch \
    --use-remote-sudo \
    --max-jobs 0 \
    --flake /path/to/dir/containing/my/flake/

Takeaways

Nice things

Potential improvements


Using Nix Flakes on OSX

I use Nix Flakes on OSX to set up my development environment. I’ve not seen anyone else document this approach. Maybe you will find it useful.

What’s in a development environment?

By “development environment,” I mean three things:

  1. Adding and mutating shell environment variables (e.g. $EDITOR)
  2. Installing command line applications (e.g. /usr/bin/nvim)
  3. Adding config files (e.g. $HOME/.config/nvim/init.lua)

Unfortunately, 2 and 3 are “impure” according to Nix because they require access to mutable paths. But there are simple workarounds: installed binaries become reachable as soon as their store paths are added to $PATH, and config files can be baked into wrapped packages (as I do with Neovim below).

So if I can mutate environment variables, including $PATH, then I can do everything!

But first, I need to explain Flakes a little bit.

A Nix Flakes primer

Sorry, I feel like every Nix article that touches on Flakes has to explain Flakes from scratch. I’ll try and stick with what’s relevant to what I’m doing. If you’re interested in a deep dive, I recommend Xe Iaso’s Nix Flakes: an Introduction.

Flakes, at their core, are a configuration format for the Nix toolchain. This format accepts inputs, which are dependencies that live in the Nix store, and produces outputs, which are read by various tools. For example, the nix CLI tool’s nix build subcommand builds the packages.default output for the flake.

See? That wasn’t so bad, was it? If this still seems a bit abstract, read on for an example.

Note: In versions of nix prior to 2.7, packages.default was known as defaultPackage. If you care about compatibility with old versions, you may want to define defaultPackage as an alias that points to packages.default.
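
A minimal sketch of such an alias (the x86_64-linux system and the hello package are only for illustration):

{
  inputs.nixpkgs.url = "nixpkgs/nixos-unstable";

  outputs = { self, nixpkgs, ... }: {
    packages.x86_64-linux.default =
      nixpkgs.legacyPackages.x86_64-linux.hello;

    # nix older than 2.7 looks for this attribute instead.
    defaultPackage.x86_64-linux =
      self.packages.x86_64-linux.default;
  };
}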

Designing a development environment

Using Flakes, I need a way to mutate environment variables. To do this, I’ll use a little-known command called nix print-dev-env:

nix print-dev-env - print shell code that can be sourced by bash to reproduce the build environment of a derivation

If you run nix print-dev-env, it will build the packages.default output of your current flake.nix.

This approach has two steps:

  1. Make a packages.default output that mutates shell environment variables as desired. For example, it should add /nix/store/abc123-nvim-wrapped/bin to the $PATH.
  2. Source the output of nix print-dev-env in my development shell.

Putting the pieces together

To construct the packages.default output, you can use pkgs.mkShell:

# In flake.nix
{
  inputs = {
    nixpkgs.url = "nixpkgs/nixos-unstable";
    flake-utils.url = "github:numtide/flake-utils";
  };

  outputs = { nixpkgs, flake-utils, ... }:
    flake-utils.lib.eachDefaultSystem (system:
      let
        pkgs = nixpkgs.legacyPackages.${system};
        neovim-with-config = pkgs.neovim.override {
          configure.customRC = ''
            lua << EOF
              -- init.lua goes here
            EOF
          '';
        };
      in {
        packages.default = pkgs.mkShell {
          packages = [
            neovim-with-config
            # anything else
          ];

          shellHook = ''
            # Optionally, inject other stuff into your shell
            # environment.
          '';
        };
      });
}

Since the shell requires neovim-with-config, its ‘build environment’ will append /nix/store/abc123-neovim-with-config/bin/ to $PATH. That’s exactly what we want.

And finally, source the output of nix print-dev-env:

# `print-dev-env` assumes bash. It mutates env variables such as
# `LINENO` that are immutable in zsh, so I need to exclude them.
# This is annoying, but in practice works fine.
$ nix print-dev-env \
  | grep -v LINENO \
  | grep -v EPOCHSECONDS \
  | grep -v EPOCHREALTIME \
  > $HOME/development-configuration.zsh 

$ echo 'source $HOME/development-configuration.zsh' >> $HOME/.zshrc

If you inspect development-configuration.zsh, you’ll see a giant RC file that includes:

PATH='...:/nix/store/abc123-neovim-with-config/bin:...'

Indeed, running nvim works as expected. We have set up a development environment using Nix Flakes!

Full dotfiles

If you want to see my full dotfiles, they live on sourcehut. Here’s a link to where I define packages.default, and here’s where I run print-dev-env.

Scoring an animation with Orca

In this post, I’ll demonstrate how to score an animation using Orca.

What is Orca?

Orca is an esoteric programming language for composing music. An Orca program is somewhere between a circuit diagram and an ASCII roguelike. But you don’t need to know either of those things to get started. As its creator, Devine Lu Linvega of the programming duo Hundred Rabbits, put it in an interview:

I was always kind of aiming, I guess, at children. I was like, if you can just open the page and put that in front of a kid, could they figure it out? It wouldn’t take that many keystrokes until they figure out which… like [the operator] E will start moving, and through the act of playing they’ll find their way without having to read the documentation.

— from Devine’s interview on the Future of Coding podcast

Some resources to get started:

The final score

The completed score, consisting of all the techniques described in this post.

Although it might seem complicated— especially if you’re not familiar with Orca— this program is actually the result of building on a few core ideas. As you read this, I hope it will feel like a natural progression to go from one step to the next.

Ok, now let’s start at the beginning.

What is a score?

Scoring an animation requires timing sounds to events on screen. For example, when a piece of glass shatters in the animation, there should be a crash sound. Let’s say this happens on frame 21. Using Orca, how do you play a sound on the 21st frame and then never again?

The answer to this was not obvious to me. Most Orca compositions, at the time of writing this, consisted of loops. I could not find any examples that did what I wanted. But since we have access to the clock frame using the C operator, this feels like it should be possible.

Here’s one way to do it:

Learning how to wait

Note: Throughout this post, I will refer to hexadecimal numbers using the prefix 0x. For example, 0x10 is decimal 16.

Note 2: I will refer to the last digit in a hexadecimal number as the “ones’ digit” and the second-to-last digit as the “sixteens’ digit.” For example, 0x10‘s ones’ digit is 0 and its sixteens’ digit is 1.

Ok, let’s begin.

First, use a pair of hexadecimal clocks Cf and fCf:

  • The frame count is at the bottom right, a monotonically increasing number.
  • Cf mods the frame count by 0xf, or 15, to output the ones’ digit, a number from 0x0 to 0xe.
  • fCf also divides the frame count by 0xf, or 15, to output the sixteens’ digit.

Next, check the outputs using F. The F operator will output a bang (*) if the inputs on either side are equal.

Finally, AND the outputs of both the Fs into a single *:

  • Y, the ‘yumper’ operator, just copies the input horizontally.
  • f here is a lowercase F that only operates on a bang.

On frame 0x15, the f outputs a bang.

Note: If the f were uppercase, it would incorrectly output a bang when both inputs are empty (i.e. when neither ones’ nor sixteens’ digits matched.)

At this point, I posted my timer to the Orca thread on lines, asking the community if there was a simpler way to do this. Devine responded, suggesting this condensed version that uses one fewer operator.

Limitations

Here, it is important to note that this timer does not actually “bang once and then never again” as originally promised. Since the timer only checks the ones’ and sixteens’ digits of the frame number, it will bang at 0x015, 0x115, 0x215, …— every 0x100 or 256 frames. This ends up being about 25 seconds at 120 bpm.

My animation happens to be 15s long, so this was not a problem for me. If anyone finds an elegant solution to the original problem as stated, I’d love to see it.

Wiring up sounds

Now, the fun part. Use this timer to schedule different sounds.

Playing a note

This can be used to play a single midi note.

Toggling a loop

First, consider a simple drum loop.

Then, use an X to set the note’s velocity, effectively turning the loop on or off. Here, it sets the velocity to 0x7.

It’s a bit silly to use X to set a constant value like this, but it should make sense in the next step.

Finally, automate this using timers:

  • The drum loop begins muted, its velocity 0x0.
  • The first timer at frame 0x15 turns on the drum loop by setting its velocity to 0xa.
  • The second timer at frame 0x28 turns it off again by setting the velocity to 0x0.

Timing a sequence of notes

Play a sequence of notes by combining operators Z and T:

  • The Z counts up from 1 to 5, exactly once.
  • This determines the note output by T.
  • The output of T is fed to :, the midi operator.

Use a timer to control when the sequence plays:

  • Z starts at 5, unlike the previous example, so it doesn’t immediately play.
  • The timer fires at frame 0x15 to activate x.
  • x sets the Z back to 0 and causes it to play.

Timers in practice

At this point, if you go back to the beginning and see the final composition, you may notice some differences from the examples. I omitted details to keep the examples small and easy to understand:

Conclusion

Here’s the completed looping animation:

A hand reaches to water a plant but drops the cup, which shatters. The plant droops, the mug pieces rise and hover in the air above the plant, before falling into the pot which we now see contains a worm. The worm is sliced in half by a piece of mug, its worm blood spreading around the dirt, before gathering and traveling up the stalk of the plant. Zooming in, we see a new stem emerge. Its bud sprouts, revealing it has grown pieces of the same mug fragments. Zooming out all the way, we see the plant is standing up once again. It has an extra stem, which the hand plucks. A worm squirms out of the pot and offscreen, and the entire animation loops.

(If you want to see more of my art, please follow my instagram.)

This is only a small taste of what Orca’s capable of doing, but I hope it’s a fun read. If you notice any mistakes in this article or want to share feedback, please reach out. I’d like to do more of these in the future.

Restic + Backblaze B2 on NixOS

While NixOS fully supports making restic backups using Backblaze, I couldn’t find documentation for it. From browsing configs on GitHub, many people seem to also use rclone but I’d rather not introduce another dependency.

Here’s how I did it:

{ config, pkgs, ... }:
{
  environment.systemPackages = [ pkgs.restic ];

  services.restic.backups.myaccount = {
    initialize = true;
    # since this uses an `agenix` secret that's only readable to the
    # root user, we need to run this script as root. If your
    # environment is configured differently, you may be able to do:
    #
    # user = "myuser
    #
    passwordFile = config.age.secrets.my_backups_pw.path;
    # what to backup.
    paths = ["/home/myusername"];
    # the name of your repository.
    repository = "b2:my_repo_name";
    timerConfig = {
      # backup every 1d
      OnUnitActiveSec = "1d";
    };


    # keep 7 daily, 5 weekly, and 10 annual backups
    pruneOpts = [
      "--keep-daily 7"
      "--keep-weekly 5"
      "--keep-yearly 10"
    ];
  };

  # Instead of doing this, you may alternatively hijack the
  # `awsS3Credentials` argument to pass along these environment
  # vars.
  #
  # If you specified a user above, you need to change it to:
  # systemd.services.user.restic-backups-myaccount = { ... }
  #
  systemd.services.restic-backups-myaccount = {
    environment = {
      B2_ACCOUNT_ID = "my_account_id_abc123";
      B2_ACCOUNT_KEY = "my_account_key_def456";
    };
  };

}
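
For reference, config.age.secrets.my_backups_pw.path above assumes the secret is declared elsewhere in your config via agenix. A minimal sketch, with a hypothetical path to the encrypted file:

{
  # Hypothetical location of the age-encrypted restic password.
  age.secrets.my_backups_pw.file = ./secrets/my_backups_pw.age;
}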

Overriding packages in NixOS

In NixOS, it’s sometimes desirable to override a package in order to extend or modify its behavior. For example, I override my Neovim to add plugins so they get all the benefits of being in the nix store. Here’s how I do it.

# in configuration.nix
nixpkgs.overlays = [
  (import ./overlays.nix)
];

# in overlays.nix
self: super: {
  neovim-mukund = self.callPackage ./packages/neovim-mukund.nix {};
}

# finally, in packages/neovim-mukund.nix
{ pkgs }:
  pkgs.neovim.override {
    vimAlias = true;
    viAlias = true;
    configure = {
      packages.mukund-plugins = with pkgs.vimPlugins; {
        start = [
          ale
          fzf-vim
          # ...
        ];
      };
    };
  }

# putting it all together
environment.systemPackages = [
  pkgs.neovim-mukund
];

Bonus: Installing a single package from main

If you need to install a single package from the main branch but keep the rest of your packages on your Nix channel (usually a stable channel or nixos-unstable), then try this:

# in configuration.nix
let
  neovim-master = (import (fetchTarball
    "https://github.com/NixOS/nixpkgs/archive/master.tar.gz") {}).neovim;
in {
  environment.systemPackages = [
    neovim-master
  ];
}

This time, I’m fetching and installing from the master.tar.gz file. This is handy if there’s an update upstream that you want to use immediately. For example, I often use this when Discord releases an update. Nixpkgs usually merges the version bump fairly quickly, but it doesn’t reach the release channels for many days during which Discord is unusable.
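
Since this post is already about overlays, the same idea can also be expressed as an overlay, so that every existing reference to pkgs.discord (used here purely as an example) picks up the version from master. A sketch:

# in overlays.nix
self: super:
let
  nixpkgs-master = import
    (fetchTarball "https://github.com/NixOS/nixpkgs/archive/master.tar.gz")
    { config.allowUnfree = true; }; # Discord is unfree
in {
  discord = nixpkgs-master.discord;
}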


Rust Magic

This is a list of places in Rust where implementing a trait or using a struct affects the syntax of your code. I think of these features as “magical” because using them can change the behavior of very basic Rust code (let, for-in, &, etc.).

What follows is a small list of (hopefully) illustrative examples, and a short epilogue pointing you to more articles if this topic interests you.


Contents

  • Drop
  • IntoIterator
  • Deref
  • Display
  • Copy
  • The ? operator

Drop

struct Foo {
    text: String,
}

impl Drop for Foo {
    fn drop(&mut self) {
        println!("{} was dropped", self.text);
    }
}

fn main() {
    let mut foo = Some(Foo {
        text: String::from("the old value"),
    });

    // this calls the drop() we wrote above
    foo = None;
}

IntoIterator

struct MyCustomStrings(Vec<String>);

impl IntoIterator for MyCustomStrings {
    type Item = String;
    type IntoIter = std::vec::IntoIter<Self::Item>;

    fn into_iter(self) -> Self::IntoIter {
        self.0.into_iter()
    }
}

fn main() {
    let my_custom_strings = MyCustomStrings(vec![
        String::from("one"),
        String::from("two"),
        String::from("three"),
    ]);

    // We can use for-in with our struct
    //
    // prints "one", "two", "three"
    for a_string in my_custom_strings {
        println!("{}", a_string);
    }
}

Deref

use std::ops::Deref;

struct Smart<T> {
    inner: T,
}

// You can implement `DerefMut` to coerce exclusive references (&mut).
impl<T> Deref for Smart<T> {
    type Target = T;

    fn deref(&self) -> &Self::Target {
        &self.inner
    }
}

fn main() {
    let text = Smart {
        inner: String::from("what did you say?"),
    };

    // The `impl Deref` lets us invoke the `&str` method
    // `to_uppercase()` on a `&Smart<String>`
    println!("{}", &text.to_uppercase());
}

Display

use std::fmt;

struct Goat {
    name: String,
}

impl fmt::Display for Goat {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        write!(f, "a goat named {}", self.name)
    }
}

fn main() {
    let goat = Goat {
        name: String::from("Yagi"),
    };

    // This invokes our `Display`'s `fmt()`
    println!("{}", goat);
}

Copy

#[derive(Clone, Copy, Debug)]
struct Point {
    x: usize,
    y: usize,
}

fn main() {
    let point_a = Point { x: 1, y: 2 };
    let point_b = point_a;

    // point_a is still valid because it was copied rather than moved.
    println!("{:?}", point_a);
}

The ? operator

// Notes:
// * This works very similarly with Option<T>
// * We need to derive(Debug) to use the error in a Result.
//
#[derive(Debug)]
struct SomeError;

fn uh_oh() -> Result<(), SomeError> {
    Err(SomeError)
}

fn main() -> Result<(), SomeError> {
    // The following line desugars to:
    //
    // match uh_oh() {
    //     Ok(v) => v,
    //     Err(SomeError) => return Err(SomeError),
    // }
    //
    uh_oh()?;

    Ok(())
}

Epilogue

When I first started compiling this list, I asked around in the Rust community discord. scottmcm from the Rust Language team introduced me to the concept of lang items. If you search for articles on this topic, you get some fantastic resources:

So what is a lang item? Lang items are a way for the stdlib (and libcore) to define types, traits, functions, and other items which the compiler needs to know about.

Rust Tidbits: What is a Lang Item? by Manish Goregaokar

Not all lang items are magical, but most magical things are lang items. If you want a deeper or more comprehensive understanding, I recommend reading Manish’s article in its entirety.

How to configure API Gateway v2 using Terraform

Here’s how you wire up an AWS lambda into an HTTP API using Terraform and AWS’s API Gateway v2 resources.

When you terraform apply this, it’ll spit out an API URL. You can GET / against that API URL to run your lambda:

resource "aws_iam_role" "plants" {
  name = "iam_plant_api"

  assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "sts:AssumeRole",
      "Effect": "Allow",
      "Principal": {
        "Service": [
          "lambda.amazonaws.com"
        ]
      }
    }
  ]
}
EOF
}

# This presumes you have a zip file get_water_level.zip
# which contains a get_water_level.js file which exports
# a `handler` function
resource "aws_lambda_function" "get_water_level" {
  filename = "get_water_level.zip"
  function_name = "get_water_level"
  publish = true
  role = aws_iam_role.plants.arn
  handler = "get_water_level.handler"
  source_code_hash = filebase64sha256("get_water_level.zip")
  runtime = "nodejs12.x"
}

resource "aws_apigatewayv2_api" "plants" {
  name          = "http-plants"
  protocol_type = "HTTP"
}

resource "aws_apigatewayv2_stage" "plants_prod" {
  api_id = aws_apigatewayv2_api.plants.id
  name = "$default"
  auto_deploy = true
}

resource "aws_apigatewayv2_integration" "get_water_level" {
  api_id = aws_apigatewayv2_api.plants.id
  integration_type = "AWS_PROXY"
  integration_method = "POST"
  integration_uri = aws_lambda_function.get_water_level.invoke_arn
}

resource "aws_apigatewayv2_route" "get_water_level" {
  api_id = aws_apigatewayv2_api.plants.id
  route_key = "GET /"
  target = "integrations/${aws_apigatewayv2_integration.get_water_level.id}"
}

resource "aws_lambda_permission" "get_water_level" {
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.get_water_level.arn
  principal     = "apigateway.amazonaws.com"
  source_arn = "${aws_apigatewayv2_stage.plants_prod.execution_arn}/*"
}

output "api_url" {
  value = aws_apigatewayv2_stage.plants_prod.invoke_url
}

Notes

  1. Anyone with the URL will be able to invoke your lambda. If you want access control or rate limiting, you’ll need to add that.

  2. Without the aws_lambda_permission, your API Gateway won’t have permission to invoke the lambda and it’ll 500.

  3. The aws_apigatewayv2_stage is a staging environment (e.g. development, production, test). You must have at least one stage, or else calls to your API will fail with “Not Found”.

  4. The aws_lambda_permission lets any route on your API’s $default stage invoke the lambda. If you want to restrict it to a particular route, you can make the source_arn more specific.


What's in a name?

I decided to catalogue examples of engineers– primarily software developers– being asked to change a name to avoid being racist, sexist, transphobic, ableist, or otherwise bigoted. About half of these examples come from responses to my AskMetaFilter question.

Note: This list is not any sort of exhaustive or representative sample. I offer it as a starting point for anyone interested in reading more about how tech communities respond. If you find any links that should be added, please don’t hesitate to send me an email.



There’s a TON of pull requests from 2020 about this, so I am only going to pick some of them.

2020

Panicking, unsafe, and you

In Jon Gjengset’s Demystifying Unsafe Code talk at Rust NYC, he gives a very interesting example of unsafe code. Here’s the link– please go and watch it– but I’ve transcribed it here along with my paraphrased explanation.

impl<T> Vec<T> {
  /// apply `f` to every element of `us`, and extend `self`
  /// with the result. for example:
  ///
  /// names.extend_map([user1, user2], |u| u.name())
  ///
  fn extend_map<U, F>(&mut self, us: &[U], mut f: F)
  where F: FnMut(&U) -> T {
    // reserve capacity in the Vec all at once, for perf (?)
    self.reserve(us.len());

    // set the length() manually.
    let cur_len = self.len();
    unsafe { self.set_len(cur_len + us.len()) };

    // insert the items by writing to the memory location
    let mut into = unsafe { self.as_mut_ptr().add(cur_len) };
    for u in us {
      unsafe {
        std::ptr::write(into, f(u));
        into = into.add(1);
      }
    }
  }
}

If f panics, then Rust will unwind the stack and drop every item in the Vec before dropping the Vec itself. However, since you’ve unsafely set its length with set_len(), it will try to call drop on indices that you haven’t written to yet! In other words, it will run destructors on garbage memory. Apparently, this is why it can be challenging to write Vec::drain() implementations.

A Few Things Like These (Ippatiyum Cila Vicayankal)

Among birds, I like crows very much.
It's true; it is a thieving creature
tactfully snatching away the eats from the hands of children.
In deed, it is a foolish creature
visiting and perching on the compound wall of the house
and caws at the oddest hours.
Even then
isn't it my friend
who looks at me and calls out to me
in my village where I crawled as a baby and grew up
and also in this city planted from elsewhere?

– Cinnakkapali (Translated by Nirmal Selvamony)

There were three or four crows standing on a branch in the trees outside our balcony, so this poem feels very timely.

Source: Oikopoetics and Tamil Poetry by Nirmal Selvamony