Running containers on the 'wrong' architecture.

#buildah #containers #emulation #linux #podman #qemu #skopeo

Containers are pretty useful: like a chroot that's been given extra powers to isolate networking, user IDs, PIDs, and all sorts of other things, without the overhead of a whole virtual machine. However, that does limit them to running software that could actually run on your machine.

I use a ppc64le machine as a daily driver, so lots of things are 'awkward' when they assume that x86_64 is the only thing that exists. However, there is a tool that can emulate other CPUs and run programs built for a different instruction set: qemu. You don't, it turns out, even need to emulate an entire hardware platform, or need special virtualization support (that's just for speed). A qemu user-mode emulation tool can run binaries for a foreign architecture (but otherwise compatible with your OS kernel) by emulating the CPU and translating system calls.

User emulation is already a lot like a container, and can make things like certain kinds of cross compiling or development work easier (as well as making it possible to run proprietary software on your architecture of choice).

On Gentoo, you need app-emulation/qemu built with the architectures you want to emulate listed in the QEMU_USER_TARGETS USE_EXPAND variable. (And for using this within containers, you want the static-user USE flag set as well.)
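A minimal sketch of that setup, assuming you put it in make.conf and a package.use file (the target list here is just illustrative):

```
# /etc/portage/make.conf
QEMU_USER_TARGETS="x86_64 aarch64"

# /etc/portage/package.use/qemu
app-emulation/qemu static-user
```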

Then, assuming you have a C program like this:

hello.c (Source)

#include <stdio.h>

int main() {
    printf("%s", "Hello from main()\n");
    return 0;
}

Then you compile it for both the native and a non-native architecture:

$ gcc -o hello-ppc64le-dyn hello.c
$ gcc -o hello-ppc64le --static hello.c
$ x86_64-multilib-linux-gnu-gcc -o hello-x86_64-dyn hello.c
$ x86_64-multilib-linux-gnu-gcc -o hello-x86_64 --static hello.c

We can then run the two native ones and see the expected output, while the non-native ones fail to execute:

$ ./hello-ppc64le-dyn
Hello from main()
$ ./hello-ppc64le
Hello from main()
$ ./hello-x86_64
bash: ./hello-x86_64: cannot execute binary file: Exec format error
$ ./hello-x86_64-dyn
bash: ./hello-x86_64-dyn: cannot execute binary file: Exec format error

The qemu-x86_64 program can run these, however.

$ qemu-x86_64 ./hello-x86_64
Hello from main()
$ qemu-x86_64 ./hello-x86_64-dyn
Hello from main()

(Assuming the cross compilation setup has the right environment for the dynamically linked version)

The Linux kernel has a feature that allows configuring an interpreter for these binaries, in much the same way that you can embed a #! (shebang) line into a script: binfmt_misc.

This can be configured through various mechanisms on different distros, but on Gentoo the OpenRC init system supplies an /etc/init.d/binfmt service that enables the mappings from files dropped into /etc/binfmt.d/.

Mine looks like this:

qemu-user.conf (Source)
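The key line registers qemu-x86_64 as the interpreter for 64-bit x86 ELF binaries. A typical entry (magic and mask as generated by qemu's qemu-binfmt-conf.sh script; the interpreter path is an assumption, and the trailing F flag makes the kernel pre-open the interpreter so it keeps working inside containers):

```
:qemu-x86_64:M::\x7fELF\x02\x01\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\x3e\x00:\xff\xff\xff\xff\xff\xfe\xfe\x00\xff\xff\xff\xff\xff\xff\xff\xff\xfe\xff\xff\xff:/usr/bin/qemu-x86_64:F
```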


So with that file in place and /etc/init.d/binfmt start run as root (and the service possibly enabled at boot), we can then just run the non-native executables 'normally':

$ ./hello-x86_64
Hello from main()
$ ./hello-x86_64-dyn
Hello from main()

This is pretty cool, and useful on its own, but you quickly have to figure out how to install dependencies and deal with a project's build system to really use it for much other than toy examples.

But we can use the static binaries directly in a container, if we want.

Containerfile.x86_64 (Source)

FROM scratch
COPY hello-x86_64 /
CMD [ "/hello-x86_64" ]

Then executing:

$ buildah bud --platform=linux/amd64 -t localhost/hello:x86_64 Containerfile.x86_64
STEP 1/3: FROM scratch
STEP 2/3: COPY hello-x86_64 /
STEP 3/3: CMD [ "/hello-x86_64" ]
COMMIT localhost/hello:x86_64
Getting image source signatures
Copying blob 0efe8d85f660 skipped: already exists
Copying config 6326d28447 done
Writing manifest to image destination
Storing signatures
--> 6326d284477
[Warning] one or more build args were not consumed: [TARGETARCH TARGETOS TARGETPLATFORM]
Successfully tagged localhost/hello:x86_64

We then get our container built, and can run it:

$ podman run --rm -it localhost/hello:x86_64
Hello from main()

And that works great, but if we try the dynamically linked version:

Containerfile.x86_64-dyn (Source)

FROM scratch
COPY hello-x86_64-dyn /
CMD [ "/hello-x86_64-dyn" ]

$ buildah bud --platform=linux/amd64 -t localhost/hello:x86_64-dyn Containerfile.x86_64-dyn
STEP 1/3: FROM scratch
STEP 2/3: COPY hello-x86_64-dyn /
STEP 3/3: CMD [ "/hello-x86_64-dyn" ]
COMMIT localhost/hello:x86_64-dyn
Getting image source signatures
Copying blob 14266541c9b2 done
Copying config 6e4c2c5dfb done
Writing manifest to image destination
Storing signatures
--> 6e4c2c5dfbd
[Warning] one or more build args were not consumed: [TARGETARCH TARGETOS TARGETPLATFORM]
Successfully tagged localhost/hello:x86_64-dyn
$ podman run --rm -it localhost/hello:x86_64-dyn
qemu-x86_64: Could not open '/lib64/': No such file or directory

We get something different. The dynamic executable requires the dynamic linker within the container, and we only put the executable file there.

But we are able to execute the binary through qemu-x86_64 just fine, even within the container, whose root filesystem doesn't contain the emulator or anything else to support our native CPU.

But the dynamic linker would work if a whole chroot-style image was present in the container.
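For instance, a sketch of that fix (assuming the public docker.io/library/debian image as a base; any complete amd64 root filesystem would do):

```dockerfile
# A full base image carries /lib64/ld-linux-x86-64.so.2 and libc,
# so the dynamically linked binary can find its interpreter.
FROM docker.io/library/debian:bookworm
COPY hello-x86_64-dyn /
CMD [ "/hello-x86_64-dyn" ]
```

Built with the same buildah bud --platform=linux/amd64 invocation, this runs where the from-scratch image could not.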

Doing a simple podman run --rm -it will pull an appropriate image for the architecture we're running on, and we can see that if we use skopeo to inspect the image locally:

$ podman run --arch ppc64le --rm -it
Trying to pull
Getting image source signatures
Copying blob 6f8c2fc0a4f9 skipped: already exists
Copying config fb133e10be done
Writing manifest to image destination
Storing signatures
root@6e00a22706a8:/# uname -m
ppc64le
$ skopeo inspect
 "Name": "",
 "Digest": "sha256:4e5d0032fffc670be1788b945476b5997b29b50aec6e84af7a76e8fb030c4326",
 "RepoTags": [],
 "Created": "2022-08-02T01:31:14.74904496Z",
 "DockerVersion": "20.10.12",
 "Labels": null,
 "Architecture": "ppc64le",
 "Os": "linux",
 "Layers": [
 "Env": [

But we can choose a non-native architecture, and assuming we configured the binfmt correctly and the image is available:

$ podman run --arch amd64 --rm -it
Trying to pull
Getting image source signatures
Copying blob d19f32bd9e41 skipped: already exists
Copying config df5de72bdb done
Writing manifest to image destination
Storing signatures
root@159ea7cdac2c:/# uname -m
x86_64
$ skopeo inspect
 "Name": "",
 "Digest": "sha256:42ba2dfce475de1113d55602d40af18415897167d47c2045ec7b6d9746ff148f",
 "RepoTags": [],
 "Created": "2022-08-02T01:30:56.165288114Z",
 "DockerVersion": "20.10.12",
 "Labels": null,
 "Architecture": "amd64",
 "Os": "linux",
 "Layers": [
 "Env": [

Then we can see we were running the container with the non-native architecture just fine.

Using such containers to build and package software for non-native architectures can then all be done locally.

Vocabulary Types

#language #philosophy #programming #types


"Vocabulary type" is a term I've come across a few times in discussions of programming; it denotes a category of programming-language types.

As I've understood the term, a vocabulary type is one that is appropriate for use in interfaces; it is passed as a parameter, returned, or otherwise visible outside the immediate module (unit of composition of the software). The type must be usable in APIs, and so has some common (language-dependent) properties, which define the category.

I have not seen much discussion of what those properties would be, so if any readers have sources that try to piece this together (or even sources mentioning vocabulary types in specific contexts), I'd like to hear about them; this is a relatively difficult topic to search for. (Both "vocabulary" and "type" show up in a lot of research and technical contexts unrelated to the concept of a "vocabulary type", so the search results are messy, at best.)

What properties do vocabulary types share, then?

It should be fairly obvious that a language's primitive types make good candidates for vocabulary types; almost by default, anything the language supplies as built-in (whether in the standard library or the language itself) rises to this level.

These types are generally stable, as robust and well-tested as possible, and strictly defined. The types provided by the language environment are necessarily always available as well. A 'good' vocabulary type would be easily accessible. In the same way that using obscure language makes writing less accessible to a broad audience, using an obscure type as a vocabulary type in your software would make it less accessible to the broadest audience of programmers.

The analogy seems to be that the set of vocabulary types defines the 'dialect' of programming that you are doing. Beyond whichever specific language (whether C++, Rust, Python, or Haskell), physics simulations will have a common vocabulary. Both in the regular sense of having specialized terminology and jargon, and in the sense of "vocabulary types" with similar purposes. e.g. A 3d vector in a cartesian space, a 3d point in a cartesian space, affine transformations, etc.

Somewhat more fundamentally than domain specific vocabulary is the question: Are there programming-language agnostic, but programming-specific vocabulary types?

Can we then design better languages and software libraries by understanding these commonalities?

Concrete Examples

Details in various programming languages, of course, will still differ, but I think candidates for these sorts of types are:


fixed-size integers (modular/unsigned)
Examples: uint32_t, unsigned int
fixed-size integers (two's-complement/signed)
Examples: int16_t, int, long int
IEEE 754 float32 and float64 (float128? 80-bit x87 floats?)
Examples: float, double
byte (an uninterpreted unit of memory that is 8 bits in size)
Examples: std::byte, char (usually), unsigned char
text string (whether Unicode, ASCII, or unspecified; represents human-language text)
Examples: std::string, char const*
type-erased "any" value
Examples: boost::any, void*
optional value (a value that may be absent)
Examples: std::optional, T*
homogeneous sequence
Examples: int[], std::vector<int>, std::span<std::byte, 16>, std::array<float, 16>, generator<int>
key->value mapping between two specific types
Examples: std::map<int, std::string>, std::unordered_map<std::string, void*>

I'll admit a C++ bias in the above, as that feels to me the most precise way to name the ideas.

Constraining this to abstract types that aren't concrete programming-language primitives at all can further simplify and restrict the idea of universal vocabulary types.

Perhaps the success of JSON (now standardized in RFC 8259) is due to just how clearly it constrains data to a universal set of broadly applicable vocabulary types:

Number: integral or floating point; exact integers are only guaranteed interoperable within the range a float64 can represent exactly, and floating-point values are likely only interoperable when float64 is used.
Boolean: a value usable in a conditional context, selecting between the two alternatives true and false.
Null: a value that is present but otherwise empty/unspecified. (Analogous to an empty optional<T>, or a nullptr-valued T*.)
String: a value representing a sequence of Unicode codepoints in the printable range.
Array: a sequence of any type (homogeneous in that the "any" type is the corresponding representation).
Object: a mapping from string values to any.

Though rarely will a particular use constrain itself so heavily unless the goal is truly broad interoperability.
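For illustration (the values here are invented), a single document exercising each of those kinds:

```json
{
  "name": "sensor-7",
  "enabled": true,
  "calibrated_at": null,
  "reading": 22.5,
  "samples": [20, 21, 23],
  "metadata": { "unit": "celsius" }
}
```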

Externally restoring u-boot on the Talos II

#talos #u-boot

I live where a power outage, while not extremely common, happens often enough that I should have anything remotely valuable or sensitive hooked up to a UPS (not merely a surge protector). However, I recently relocated a bunch of my electronics, and my Talos II-based workstation currently lives in my home office without the benefit of a UPS.

I had a power outage that lasted only about 5 minutes, and appeared to be a straight blackout/cut followed by restoration. (Maintenance, cutover of a damaged circuit, whatever, no idea what happened)

This then, unfortunately, left my system unwilling to boot. The BMC LED on the mainboard would turn on at a quick flash when applying power, but the power buttons did not respond and the system would not completely turn on.

This implied the BMC, which controls power sequencing and bringup, was not functioning. After some troubleshooting to try to get a network connection into the BMC, it became apparent the BMC was at fault, somehow.

Eventually, that led me to a page on the Raptor Computing Systems wiki describing the steps needed to restore functionality.

Firstly, I needed a serial connection to the BMC itself, which required ordering a serial-header bracket. It appears that no vendor actually specifies what pinout their brackets use, and at least on the Amazon listing I ordered from, the reviewer stated the pinout incorrectly.

Some jumper wires later, I got the serial connection hooked up and was able to see that on applying power, I got nothing but the SoC reset output. No u-boot output, nothing except the SoC continuously resetting.

So now I needed to find a way to reflash u-boot. The above-linked wiki page links to ASPEED (the SoC manufacturer) for a download of a "socflash" utility. However, in their desire to be as unfriendly as possible to anyone who won't pay them money, they've removed that tool from public download and reserved it for developer accounts (which require a sales contact to get).

There's a one-sentence statement that you can use flashrom to write the chip, with an external programmer, so that seemed like my best option, as I have a Segger J-Link which flashrom supports as a SPI programmer.

Now, to find the information I needed to flash that chip. Since this is all fairly open hardware, the chip is socketed, which meant I was able to remove the BMC's flash chip, look up its pinout, and match that against the flashrom documentation for using the J-Link as a programmer.

Flash Chip Pin  J-Link Pin  Comments
15 (SI)         5 (TDI)
8 (SO)          13 (TDO)
16 (SCLK)       9 (TCK)
7 (CS#)         15 (RESET)
2 (VCC)         1 (VTREF)   Also connected to 3.3V bench supply
10 (GND)        4 (GND)

Also of note is that when connecting pins on the J-link using an IDC cable, the numbering is mirrored relative to the key on the connector. (Pin numbering is for the pins on the J-Link itself)

With this connected, I was able to convince flashrom to read the existing image from the flash and verify that connectivity was working and the chip was recognized:

$ ./flashrom/flashrom -p jlink_spi -r bmc-rom
flashrom v1.1-rc1-115-g61e16e5-dirty on Linux 5.3.0-gentoo (x86_64)
flashrom is free software, get the source code at

Using clock_gettime for delay loops (clk_id: 1, resolution: 1ns).
Found Macronix flash chip "MX25L25635F/MX25L25645G" (32768 kB, SPI) on jlink_spi.
Reading flash... done.

Now I just needed the flash image and some idea of how to write it.

Downloading the 1.07 firmware from the wiki, I ended up with a few files, and didn't know what to do with them.

Thankfully a few folks on the #talos-workstation channel on Freenode were able to sort me out, providing the necessary context to know that the image-bmc file in the firmware download is the entire combined flash image, along with a link to the kernel DTS, which shows the ranges/offsets of the flash regions on the chip.

(Thanks to mearon, dormito, and hanetzer)

Thus I was able to construct a flashrom layout file:

00000000:0005ffff u-boot
00060000:0007ffff u-boot-env
00080000:0067ffff kernel_a
00680000:0077ffff dev-data
00780000:00d7ffff kernel_b
00d80000:01ffffff rwfs

A caveat with the above: I only tested writing the u-boot section, so use it at your own risk and with a full understanding of what you are trying to accomplish.

And, with some trepidation, I wrote the u-boot section with the following command:

$ ./flashrom/flashrom -p jlink_spi -l bmc.layout -i u-boot -w image-bmc
flashrom v1.1-rc1-115-g61e16e5-dirty on Linux 5.3.0-gentoo (x86_64)
flashrom is free software, get the source code at

Using region: "u-boot".
Using clock_gettime for delay loops (clk_id: 1, resolution: 1ns).
Found Macronix flash chip "MX25L25635F/MX25L25645G" (32768 kB, SPI) on jlink_spi.
Reading old flash chip contents... done.
Erasing and writing flash chip... Erase/write done.
Verifying flash... VERIFIED.

Leading to success, thankfully.

A bit of careful re-seating of the chip into the motherboard socket and I was able to get a fully booted BMC, and the rest of my system again, no worse for wear otherwise.

Forcing Firefox keyboard shortcuts the hard way


Firefox's Quantum update provides some significant improvements in performance and other general underlying technology things (that I don't fully understand, as I'm not a Mozilla developer). However, in their eagerness to throw away as much of the community's work as possible, they crippled the ability of addons to customize the interface, instead turning addons into little more than glorified Greasemonkey scripts.

A lot of the interface customizations I want are possible with userChrome.css modifications, which I'm relatively comfortable doing, though they're limited to the specific profile in use. It'd be better if there were a way to automate this, or any sensible documentation, but it's open source: you read the code and figure it out.

On the other hand, Mozilla seems hell-bent on completely removing the ability to change the keyboard shortcuts in any way. There are half-baked solutions that can only change shortcuts within the page, or only work if you're not on a "secure" page, whatever the hell that is. (i.e. it's fine to change the browser's behavior in GMail, but addons aren't allowed to change the browser's behavior on the addons page... as if "inconsistently and partially" customizable is what anyone actually wants)

There's some information on how to modify the sources to change keyboard shortcuts, but it doesn't appear to have been updated for the latest releases. Firefox 64, in my case.

Now, since I'm running Gentoo, it's not terribly difficult to patch the source. User patches are well supported, so all I need is a patch that changes the shortcuts how I want. Still not actually configurable, but at least I can make it behave.

listings/newtab.patch (Source)

diff -dupbr firefox-64.0/browser/base/content/ firefox-64.0-edit/browser/base/content/
--- firefox-64.0/browser/base/content/  2018-12-06 21:56:20.000000000 -0500
+++ firefox-64.0-edit/browser/base/content/ 2018-12-26 10:41:18.293000000 -0500
@@ -116,7 +116,7 @@
    <key id="key_newNavigator"
-         modifiers="accel" reserved="true"/>
+         modifiers="accel,shift" reserved="true"/>
    <key id="key_newNavigatorTab" key="&tabCmd.commandkey;" modifiers="accel"
         command="cmd_newNavigatorTabNoEvent" reserved="true"/>
    <key id="focusURLBar" key="&openCmd.commandkey;" command="Browser:OpenLocation"
@@ -293,7 +293,7 @@
    <key id="key_undoCloseTab" command="History:UndoCloseTab" key="&tabCmd.commandkey;" modifiers="accel,shift"/>
-    <key id="key_undoCloseWindow" command="History:UndoCloseWindow" key="&newNavigatorCmd.key;" modifiers="accel,shift"/>
+    <!-- <key id="key_undoCloseWindow" command="History:UndoCloseWindow" key="&newNavigatorCmd.key;" modifiers="accel,shift"/> -->
#ifdef XP_GNOME
diff -dupbr firefox-64.0/browser/locales/en-US/chrome/browser/browser.dtd firefox-64.0-edit/browser/locales/en-US/chrome/browser/browser.dtd
--- firefox-64.0/browser/locales/en-US/chrome/browser/browser.dtd   2018-12-06 21:56:20.000000000 -0500
+++ firefox-64.0-edit/browser/locales/en-US/chrome/browser/browser.dtd  2018-12-26 10:38:05.298000000 -0500
@@ -104,8 +104,8 @@ These items have the same accesskey but
<!ENTITY  listAllTabs.label      "List all tabs">
<!ENTITY tabCmd.label "New Tab">
-<!ENTITY tabCmd.accesskey "T">
-<!ENTITY tabCmd.commandkey "t">
+<!ENTITY tabCmd.accesskey "N">
+<!ENTITY tabCmd.commandkey "n">
<!-- LOCALIZATION NOTE (openLocationCmd.label): "Open Location" is only
displayed on OS X, and only on windows that aren't main browser windows, or
when there are no windows but Firefox is still running. -->

What I want is for the New Tab action to be bound to C-N rather than C-T, and to have New Window (on the off chance I ever use it) moved to C-S-N. (Unbinding the Undo Close Window accelerator entirely. Who uses that?)

Adding rocker mouse gestures would be nice, but I can live without the rest of the changes I've used in previous versions. (Mostly holdovers from when I used Opera.)

Installing an OS on the Talos II

#gentoo #linux #ppc64 #ppc64le #talos

The goal is to get a Gentoo system going.

Starting from a ppc64le fedora install ISO, grab the kernel and initrd and place them in a tftp-accessible directory.

Get the contents of the ISO accessible via http, and boot the Talos II connected to the serial console from the BMC.

Add a custom entry to petitboot:

  • kernel: tftp://
  • initrd: tftp://
  • Boot arguments: repo= inst.vnc

Where the tftp parts come from ppc/ppc64/ on the install ISO, and the Fedora ppc64le ISO is loop-mounted at /fedora on the HTTP server.

Boot the Fedora installer, watching the serial console.

Connect to provided vnc IP address.

Select minimal install.

Add a PReP boot partition to make the installer happy.

Add a single 15G partition for / (as ext4)

Change hostname in network config to "talos-fedora", leave automatic networking set up.

Begin installation.

Set root password and create (administrative) user.

Wait for install to complete.

Reboot installer, watch serial console.

Fedora installation now shows in petitboot, select it.

Now you can log in over SSH, rather than the serial console, and we can try to figure out how to get Gentoo on this thing.

Fedora doesn't seem to know about the VGA or the PCI video card, so no direct monitor/keyboard yet:

Linux talos-fedora 4.18.7-200.fc28.ppc64le #1 SMP Mon Sep 10 15:21:50 UTC 2018 ppc64le ppc64le ppc64le GNU/Linux

Create a new 20G ext4 partition for a Gentoo prefix, to use to bootstrap the new system, mount it on /prefix.

  • Install gcc
  • Install gcc-c++
  • Install tar
  • Install make
  • Install bzip2
  • Install python (python2)
  • Install pax-utils (for scanelf)

(Install htop for a pretty view of the system while you wait)

Follow the Gentoo Prefix bootstrap guide to make a prefix.

Takes a bit, then fails because it can't select a profile (unknown arch).

$ cd /prefix/etc/portage
$ ln -s ../../usr/portage/profiles/default/linux/powerpc/ppc64/17.0/64bit-userland/little-endian/ make.profile

And try again.

Now it stops at installing baselayout-prefix because it's not keyworded:

The following keyword changes are necessary to proceed:
 (see "package.accept_keywords" in the portage(5) man page for more details)
 # required by sys-apps/baselayout-prefix (argument)
 =sys-apps/baselayout-prefix-2.2-r5 **

So let's add that, but this is early in bootstrapping so it needs to live in /prefix/tmp/etc/portage/package.accept_keywords.

Next failure is:

* QA Notice: the following files are outside of the prefix:
* /usr
* /usr/bin
* /usr/bin/gcc-config
* /usr/lib64
* /usr/lib64/misc
* /usr/lib64/misc/gcc-config
* ERROR: sys-devel/gcc-config-1.8-r1::gentoo failed:
*   Aborting due to QA concerns: there are files installed outside the prefix

Unpleasant. Let's unmask version 2.0 (of gcc-config), which seems to be happier.

GCC fails to bootstrap, appears to be this issue:

Let's unmask GCC 8.2 and see how it goes.

Now it looks like binutils is having trouble, configure is failing, with the config.log showing this:

/prefix/var/tmp/portage/sys-devel/binutils-2.30-r2/work/binutils-2.30/configure: line 4391: ./a.out: Too many levels of symbolic links

Which seems just a bit nuts; the path to the workdir does not contain any symbolic links.

Running the configure line from config.log, while in the build directory appears to work:

[enimihil@talos-fedora build]$ /prefix/var/tmp/portage/sys-devel/binutils-2.30-r2/work/binutils-2.30/configure --enable-gold --enable-plugins --disable-nls --with-system-zlib --build=powerpc64le-unknown-linux-gnu --enable-secureplt --enable-default-hash-style=gnu --prefix=/prefix/usr --host=powerpc64le-unknown-linux-gnu --target=powerpc64le-unknown-linux-gnu --datadir=/prefix/usr/share/binutils-data/powerpc64le-unknown-linux-gnu/2.30 --datarootdir=/prefix/usr/share/binutils-data/powerpc64le-unknown-linux-gnu/2.30 --infodir=/prefix/usr/share/binutils-data/powerpc64le-unknown-linux-gnu/2.30/info --mandir=/prefix/usr/share/binutils-data/powerpc64le-unknown-linux-gnu/2.30/man --bindir=/prefix/usr/powerpc64le-unknown-linux-gnu/binutils-bin/2.30 --libdir=/prefix/usr/lib64/binutils/powerpc64le-unknown-linux-gnu/2.30 --libexecdir=/prefix/usr/lib64/binutils/powerpc64le-unknown-linux-gnu/2.30 --includedir=/prefix/usr/lib64/binutils/powerpc64le-unknown-linux-gnu/2.30/include --enable-obsolete --enable-shared --enable-threads --enable-relro --enable-install-libiberty --disable-werror --with-bugurl= --with-pkgversion=Gentoo 2.30 p2 --disable-static --disable-gdb --disable-libdecnumber --disable-readline --disable-sim --without-stage1-ldflags

A make in the build directory also appears to work, but the stage3 bootstrap does not seem to be able to successfully run the configure.

Manually running the stage3 and stage2 bootstrap steps gives me an error telling me it cannot find emerge, but running stage1 appears to try to do something:

$ ./ /prefix stage1
* patch-2.7.5 successfully bootstrapped
* stage1 successfully finished
[enimihil@talos-fedora ~]$ ./ /prefix stage2
* Bootstrapping Gentoo prefixed portage installation using
* host:   powerpc64le-unknown-linux-gnu
* prefix: /prefix
* ready to bootstrap stage2
!!! emerge not found, did you bootstrap stage1?
[enimihil@talos-fedora ~]$

So, back to the automatic version...

Still fails with the same configure issue; how about unmasking a newer binutils? This one needs to be unmasked in the prefix itself, under /prefix/etc/portage/....

Same issue, however.

Okay, so let's just give up on the prefix thing entirely and try cross-building from an existing amd64 Gentoo system.

Get a crossdev build set up, as described in the Gentoo crossdev guide, for the powerpc64le-linux-gnu target.

What we want to do is build @system and the stage1 packages as binpkgs (pass -b to emerge) so we can manually populate a stage1-like chroot to bootstrap Gentoo.

I ended up with the following versions of the cross-merged toolchain:

  • cross-powerpc64le-linux-gnu/binutils-2.31.1
  • cross-powerpc64le-linux-gnu/gcc-8.2.0-r2
  • cross-powerpc64le-linux-gnu/glibc-2.27-r6
  • cross-powerpc64le-linux-gnu/linux-headers-4.17

Having finished the cross merge of @system, I now have 163 tarballs in /usr/powerpc64le-linux-gnu/packages.

So now, on my temporary Fedora install:

[root@talos-fedora prefix]# for pkg in $(ssh enimihil@ "find /usr/powerpc64le-linux-gnu/packages/ -name '*.tbz2'"); do ssh enimihil@ "cat $pkg" | tar xjf -; done

And I should have a Gentoo chroot that is cross-built enough to get a system going.

So create/mount /proc, /sys, and bind-mount /dev before chrooting into the newly cross-built Gentoo system:

[root@talos-fedora ~]# chroot /prefix /bin/bash
talos-fedora / #

Now we should be able to follow the handbook to install/bootstrap a complete system:

talos-fedora / # emerge -av @system
setlocale: unsupported locale setting
!!! Section 'gentoo' in repos.conf has location attribute set to nonexistent directory: '/usr/portage'
!!! Invalid Repository Location (not a dir): '/usr/portage'
portage: 'portage' user or group missing.
     For the defaults, line 1 goes into passwd, and 2 into group.
*** WARNING ***  For security reasons, only system administrators should be
*** WARNING ***  allowed in the portage group.  Untrusted users or processes
*** WARNING ***  can potentially exploit the portage group for attacks such as
*** WARNING ***  local privilege escalation.

setlocale: unsupported locale setting

!!! /etc/portage/make.profile is not a symlink and will probably prevent most merges.
!!! It should point into a profile within /usr/portage/profiles/
!!! (You can safely ignore this message when syncing. It's harmless.)

!!! Your current profile is invalid. If you have just changed your profile
!!! configuration, you should revert back to the previous configuration.
!!! Allowed actions are limited to --help, --info, --search, --sync, and
!!! --version.
talos-fedora / #

Alright, so there's some work to do. Let's get the portage snapshot, set up the minimum /etc/passwd entries and a working /etc/resolv.conf (using, ew, nano), then emerge --sync.
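For reference, the defaults that portage's earlier warning refers to are the standard entries below (verify against the message your portage actually prints); the first line belongs in /etc/passwd, the second in /etc/group:

```
portage:x:250:250:portage:/var/tmp/portage:/bin/false
portage::250:portage
```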

Now that we have a portage tree, let's set a profile:

talos-fedora / # eselect profile list
Available profile symlink targets:
[1]   default/linux/powerpc/ppc64/13.0/64bit-userland (exp)
[2]   default/linux/powerpc/ppc64/13.0/64bit-userland/desktop (exp)
[3]   default/linux/powerpc/ppc64/13.0/64bit-userland/desktop/gnome (exp)
[4]   default/linux/powerpc/ppc64/13.0/64bit-userland/desktop/gnome/systemd (exp)
[5]   default/linux/powerpc/ppc64/13.0/64bit-userland/developer (exp)
[6]   default/linux/powerpc/ppc64/13.0/64bit-userland/little-endian (exp)
[7]   default/linux/powerpc/ppc64/13.0/64bit-userland/little-endian/systemd (exp)
[8]   default/linux/powerpc/ppc64/17.0/64bit-userland (stable)
[9]   default/linux/powerpc/ppc64/17.0/64bit-userland/desktop (stable)
[10]  default/linux/powerpc/ppc64/17.0/64bit-userland/desktop/gnome (stable)
[11]  default/linux/powerpc/ppc64/17.0/64bit-userland/desktop/gnome/systemd (stable)
[12]  default/linux/powerpc/ppc64/17.0/64bit-userland/developer (stable)
[13]  default/linux/powerpc/ppc64/17.0/64bit-userland/little-endian (exp)
[14]  default/linux/powerpc/ppc64/17.0/64bit-userland/little-endian/systemd (exp)
[15]  hardened/linux/powerpc/ppc64/64bit-userland (dev)
talos-fedora / # eselect profile set --force 13

Apparently using little-endian is considered experimental.

Since it's complaining about an unsupported locale, I unset LANG, then try to get the system going.

Turns out we need app-portage/elt-patches to build sys-devel/binutils, and that needs a /bin/sh symlink, so:

talos-fedora / # ln -s /bin/bash /bin/sh
talos-fedora / # emerge -av --nodeps app-portage/elt-patches
talos-fedora / # emerge -av --nodeps sys-devel/binutils

Or, never mind; we don't have a gcc yet, so let's make sure I have binpkgs for those (on my cross-building machine):

sudo powerpc64le-linux-gnu-emerge -avb sys-devel/binutils sys-devel/gcc sys-libs/glibc

And run the same command as above to splat all the binpkgs into the chroot.

Let's add some options to portage in this chroot as well (in /etc/portage/make.conf):

EMERGE_DEFAULT_OPTS="--quiet-unmerge-warn --jobs --load-average=32"

That should let us take a bit more advantage of the power of this machine.

And let's try that binutils merge again...

Nope, not quite. We're missing awk, and gcc-config seems very unhappy, so let's do this:

talos-fedora /etc/env.d/gcc # ln -s /usr/bin/gawk /usr/bin/awk
talos-fedora /etc/env.d/gcc # ln -s /usr/bin/powerpc64le-linux-gnu-gcc /usr/bin/gcc
talos-fedora /etc/env.d/gcc # ln -s /usr/bin/powerpc64le-linux-gnu-as /usr/bin/as
talos-fedora /etc/env.d/gcc # ln -s /usr/bin/powerpc64le-linux-gnu-ld /usr/bin/ld
talos-fedora /etc/env.d/gcc # ln -s /usr/bin/powerpc64le-linux-gnu-g++ /usr/bin/g++
talos-fedora /etc/env.d/gcc # ln -s /usr/bin/powerpc64le-linux-gnu-ar /usr/bin/ar
talos-fedora /etc/env.d/gcc # ln -s /usr/bin/powerpc64le-linux-gnu-nm /usr/bin/nm
talos-fedora /etc/env.d/gcc # ln -s /usr/bin/powerpc64le-linux-gnu-ranlib /usr/bin/ranlib
talos-fedora /etc/env.d/gcc # ln -s /usr/bin/powerpc64le-linux-gnu-cpp /usr/bin/cpp
talos-fedora /etc/env.d/gcc # ln -s /usr/bin/powerpc64le-linux-gnu-c++ /usr/bin/c++

Now, let's see if a compile will work? Nope. Seems like binutils is also wonky.

Things aren't being installed quite like I expected, gcc-config thinks the compiler is a cross compiler, etc.

Maybe we're supposed to be using powerpc64le-unknown-linux-gnu?

Okay... I'll try that:

enimihil@tbd ~$ sudo crossdev -t powerpc64le-unknown-linux-gnu -s4

Splatting the binary files seems to make things somewhat happier, but it does appear to need some manual intervention to get gcc-config and binutils-config to recognize a valid profile.

I added the /etc/env.d/binutils/config-powerpc64le-unknown-linux-gnu and the corresponding /etc/env.d/gcc/... file to point at the correct profiles, ran env-update and . /etc/profile, and the compiler now seems reasonably happy to chug along building a non-cross-compiled binutils:

emerge -av --nodeps sys-devel/binutils
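
For reference, those config files are one-liners naming the active profile; roughly like this (version numbers are hypothetical; gcc-config -l and binutils-config -l list the real profile names):

```
# /etc/env.d/gcc/config-powerpc64le-unknown-linux-gnu
CURRENT=powerpc64le-unknown-linux-gnu-8.2.0

# /etc/env.d/binutils/config-powerpc64le-unknown-linux-gnu
CURRENT=2.30
```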

Adding ~sys-devel/gcc-8.2.0 to /etc/portage/profile/package.accept_keywords allowed me to:

emerge -av --nodeps sys-devel/gcc

(GCC 7.2.0 fails to build.)

Working our way towards getting an @system merge:

emerge -av --nodeps autoconf
emerge -av --nodeps sys-libs/pam
emerge -av --nodeps dev-util/pkgconfig

And now:

talos-fedora / # mount -t devpts pts /dev/pts -o gid=5
talos-fedora / # USE="-acl -gdbm -berkdb -nls" emerge -av1 @system

Fails because libffi is picky about being installed on top of itself:

talos-fedora / # rm -rf /usr/lib64/libffi*
talos-fedora / # emerge -av1 libffi
talos-fedora / # USE="-acl -gdbm -berkdb -nls" emerge -avu1 --keep-going @system

Okay, now coreutils won't build:

* ERROR: sys-apps/coreutils-8.29-r1::gentoo failed (prepare phase):
*   patch -p1  failed with /var/tmp/portage/sys-apps/coreutils-8.29-r1/work/patch/003_all_coreutils-gentoo-uname.patch


Maybe a sync will fix it:

emerge --sync

Nope, unmask the latest and try?:

# Add ~sys-apps/coreutils-8.30 to accept_keywords
talos-fedora / # USE="-acl -gdbm -berkdb -nls" emerge -av1 sys-apps/coreutils

Nope, same issue; but since it's a Gentoo-specific patch that's failing, the vanilla USE flag should skip it:

talos-fedora / # USE="-acl -gdbm -berkdb -nls vanilla" emerge -av1 sys-apps/coreutils

Does the trick, let's keep going:

talos-fedora / # USE="-acl -gdbm -berkdb -nls" emerge -avu1 --keep-going @system

Hm, now patch itself is failing. No vanilla USE flag to fall back on here...:

talos-fedora / # cd /var/tmp/portage/sys-devel/patch-2.7.6-r2/work/patch-2.7.6
talos-fedora /var/tmp/portage/sys-devel/patch-2.7.6-r2/work/patch-2.7.6 # patch -p1 < /var/tmp/portage/sys-devel/patch-2.7.6-r2/files/patch-2.7.6-fix-test-suite.patch
patch: /lib64/ version `ATTR_1.3' not found (required by patch)

Ah, okay. We've broken patch. Great. And we need a working patch to build patch. sigh

So on my cross build machine, let's try this:

tbd /net/tbd/home/enimihil # USE="-xattr" powerpc64le-unknown-linux-gnu-emerge -avb patch

talos-fedora / # exit
[root@talos-fedora prefix]# scp enimihil@*tbz2 .
patch-2.7.6-r2.tbz2                                          100%  172KB  32.7MB/s   00:00
[root@talos-fedora prefix]# tar xvpf patch-2.7.6-r2.tbz2

bzip2: (stdin): trailing garbage after EOF ignored
[root@talos-fedora prefix]# chroot . /bin/bash
talos-fedora / # patch
talos-fedora / #

That seems to have done it, now let's build patch:

talos-fedora / # USE="-acl -gdbm -berkdb -nls" emerge -avu1 sys-devel/patch

And see if the rest of @system will build now...:

talos-fedora / # USE="-acl -gdbm -berkdb -nls" emerge -avu1 --keep-going @system

Broke because sysvinit couldn't find a root group, so I added that to /etc/group.
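
The missing entry is just the standard first line of /etc/group:

```
root:x:0:
```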

... and now, it seems, the linker is broken because it wants a GLIBC symbol version that doesn't exist:

talos-fedora / # gcc test.c
/usr/lib/gcc/powerpc64le-unknown-linux-gnu/8.2.0/../../../../powerpc64le-unknown-linux-gnu/bin/ld: /lib64/ version `GLIBC_2.27' not found (required by /usr/lib/gcc/powerpc64le-unknown-linux-gnu/8.2.0/../../../../powerpc64le-unknown-linux-gnu/bin/ld)
collect2: error: ld returned 1 exit status

Isn't symbol versioning supposed to not break things?:

talos-fedora / # ls -l /lib64/libc-*
-rwxr-xr-x 1 root root 2110680 Sep 19 15:07 /lib64/
-rwxr-xr-x 1 root root 2176288 Sep 18 23:08 /lib64/
talos-fedora / # ls -l /lib64/
lrwxrwxrwx 1 root root 12 Sep 19 15:07 /lib64/ ->
talos-fedora / #

Not for a downgrade, I guess.

So let's fix the symlink and keyword glibc?

Or, not:

talos-fedora / # ln -sf /lib64/
talos-fedora / # ls

We probably need to symlink all the glibc libs, then:

[root@talos-fedora prefix]# ls lib64/*-2.27*
lib64/               lib64/          lib64/
lib64/           lib64/           lib64/
lib64/  lib64/         lib64/
lib64/             lib64/  lib64/
lib64/          lib64/      lib64/
lib64/         lib64/     lib64/
[root@talos-fedora prefix]#

[root@talos-fedora prefix]# ln -sf lib64/
[root@talos-fedora prefix]# ln -sf lib64/
[root@talos-fedora prefix]# ln -sf lib64/
[root@talos-fedora prefix]# ln -sf lib64/
[root@talos-fedora prefix]# ln -sf lib64/
[root@talos-fedora prefix]# ln -sf lib64/
[root@talos-fedora prefix]# ln -sf lib64/
[root@talos-fedora prefix]# ln -sf lib64/
[root@talos-fedora prefix]# ln -sf lib64/
[root@talos-fedora prefix]# ln -sf lib64/
[root@talos-fedora prefix]# ln -sf lib64/
[root@talos-fedora prefix]# ln -sf lib64/
[root@talos-fedora prefix]# ln -sf lib64/
[root@talos-fedora prefix]# ln -sf lib64/
[root@talos-fedora prefix]# ln -sf lib64/
[root@talos-fedora prefix]# ln -sf lib64/
[root@talos-fedora prefix]# ln -sf lib64/
[root@talos-fedora prefix]# chroot . /bin/bash
talos-fedora / #

Added ~sys-libs/glibc-2.27 to my package.accept_keywords and re-merged that:

talos-fedora / # emerge -av1 sys-libs/glibc
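
At this point the keyword unmasks have accumulated; /etc/portage/profile/package.accept_keywords looks roughly like:

```
~sys-devel/gcc-8.2.0
~sys-apps/coreutils-8.30
~sys-libs/glibc-2.27
```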

I guess crossdev (unsurprisingly) uses the ~arch versions of packages.

And back to the @system:

talos-fedora / # USE="-acl -gdbm -berkdb -nls" emerge -avu1 --keep-going @system

Alright, seems like we need /dev/shm to build python:

* configure has detected that the sem_open function is broken.
* Please ensure that /dev/shm is mounted as a tmpfs with mode 1777.
* ERROR: dev-lang/python-3.6.5::gentoo failed (configure phase):
*   Broken sem_open function (bug 496328)

talos-fedora / # mount -t tmpfs tmpfs /dev/shm
talos-fedora / # USE="-acl -gdbm -berkdb -nls" emerge -avu1 --keep-going @system
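
The configure probe boils down to /dev/shm existing as a world-writable, sticky-bit (mode 1777) directory, which sem_open needs. A quick shell check along those lines:

```shell
#!/bin/sh
# Rough equivalent of python's configure probe: sem_open needs /dev/shm
# to be a world-writable directory with the sticky bit set (mode 1777).
d=/dev/shm
if [ -d "$d" ] && [ -w "$d" ] && [ -k "$d" ]; then
    echo "$d looks usable for POSIX semaphores"
else
    echo "$d is missing or has the wrong mode; try:" >&2
    echo "  mount -t tmpfs -o mode=1777 tmpfs $d" >&2
fi
```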

Okay, excellent. Now that we've gotten the chroot bootstrapped into a sensible-ish state, let's try to update it without the USE flag workarounds:

emerge -avuDN world

Now that we have something of a franken-installed system, let's see if we can use catalyst to build a proper stage3 and install a clean system based on it:

emerge -av catalyst
# And because I'm sick of nano
emerge -av vim

Since there's no seed stage to start with, let's tar up the chroot filesystem as the stage3...:

[root@talos-fedora prefix]# tar --one-file-system -cjpf /current-stage3-ppc64le-20180919.tar.bz2 .
[root@talos-fedora prefix]# mv /current-stage3-ppc64le-20180919.tar.bz2 /prefix/var/tmp/catalyst/builds/default/

And having made a catalyst snapshot and edited a stage1.spec:

talos-fedora / # catalyst -f stage1.spec
Catalyst, version 2.0.18
Copyright 2003-2008 Gentoo Foundation
Copyright 2008-2012 various authors
Distributed under the GNU General Public License version 2.1

Using default Catalyst configuration file, /etc/catalyst/catalyst.conf
Setting sharedir to config file value "/usr/lib64/catalyst"
Setting snapshot_cache to config file value "/var/tmp/catalyst/snapshot_cache"
Setting hash_function to config file value "crc32"
Setting storedir to config file value "/var/tmp/catalyst"
Setting portdir to config file value "/usr/portage"
Setting distdir to config file value "/usr/portage/distfiles"
Setting options to config file value "autoresume bindist kerncache pkgcache seedcache snapcache"
Autoresuming support enabled.
Binary redistribution enabled
Kernel cache support enabled.
Package cache support enabled.
Seed cache support enabled.
Snapshot cache support enabled.
Envscript support enabled.

!!! catalyst: Argument "compression_mode" not recognized.
    Also: Argument "decompressor_search_order" not recognized.
    Also: Argument "portage_prefix" not recognized.

Traceback (most recent call last):
File "/usr/lib64/catalyst/catalyst", line 216, in build_target
    mytarget=targetmap[addlargs["target"]](conf_values, addlargs)
File "modules/", line 17, in __init__
File "modules/", line 22, in __init__
File "modules/", line 10, in __init__
File "/usr/lib64/catalyst/modules/", line 689, in addl_arg_parse
    raise CatalystError, '\n\tAlso: '.join(messages)
!!! catalyst: Error encountered during run of target stage1
Catalyst aborting....

So, clearly the wiki page is crazy, or the files are not up to date, or... something. After dropping the unrecognized options from the spec:

talos-fedora / # catalyst -f stage1.spec
Catalyst, version 2.0.18
Copyright 2003-2008 Gentoo Foundation
Copyright 2008-2012 various authors
Distributed under the GNU General Public License version 2.1

Using default Catalyst configuration file, /etc/catalyst/catalyst.conf
Setting sharedir to config file value "/usr/lib64/catalyst"
Setting snapshot_cache to config file value "/var/tmp/catalyst/snapshot_cache"
Setting hash_function to config file value "crc32"
Setting storedir to config file value "/var/tmp/catalyst"
Setting portdir to config file value "/usr/portage"
Setting distdir to config file value "/usr/portage/distfiles"
Setting options to config file value "autoresume bindist kerncache pkgcache seedcache snapcache"
Autoresuming support enabled.
Binary redistribution enabled
Kernel cache support enabled.
Package cache support enabled.
Seed cache support enabled.
Snapshot cache support enabled.
Envscript support enabled.

!!! catalyst: Unknown build machine type ppc64le

Traceback (most recent call last):
File "/usr/lib64/catalyst/catalyst", line 216, in build_target
mytarget=targetmap[addlargs["target"]](conf_values, addlargs)
File "modules/", line 17, in __init__
File "modules/", line 96, in __init__
raise CatalystError, "Unknown build machine type "+buildmachine
!!! catalyst: Error encountered during run of target stage1
Catalyst aborting....

Ah, and herein we learn that catalyst has no idea about ppc64le.

I guess we'll muddle along with the franken-install, then, actually make that bootable.

So we'll follow the handbook for a chrooted install, skipping a bootloader, as we've got petitboot already.

Getting a kernel set up and manually configuring petitboot appears to get us booted, ta-da.

My response to the FCC on proceeding 17-108



I submitted this as an official filing to the FCC (confirmation number 20170530680421266) as my personal take on why the proposed changes are wrong for consumers and the industry. The full response is included below; you may want to have a copy of the proceeding open as well, as I respond to each paragraph (of those I chose to address) by number.

My Filing

As a professional in the telecommunications industry, I find these proposed changes obnoxious to the public interest. The central assumption seems to be that a functioning market in Internet Service Providers exists, and has served us well for the past 20 years under the previous regulatory framework (before the Title II Order).

This is not the case, and policies that depend on market competition require a functioning market to be effective; that is, a market with a low cost of entry, a plethora of competing offerings (not just two or three) and low friction to changing which offering to buy. This is decidedly not the case with Internet service in any part of the United States I am aware of. So while the past years have seen a meteoric rise in Internet services and innovation, it is my opinion that this is in spite of the regulatory environment, rather than caused by it.

In Paragraph 27, the commission makes a grave and dangerous error in stating that a user "posting on social media" or "drafting a blog" is engaged in making use of an information service provided by their broadband Internet provider. The information service is that of the social media service, or the blogging platform, not of the user's ISP. The only service of their ISP being used is the transmission of the data (which the user has chosen) to the point they have determined (the provider of the information service) without any modification.

In particular, the broadband provider does not "offer" the social media service to its subscribers; the social media company does. Likewise, the blogging platform offers its users the "capability" to publish and make information available online. The broadband provider does not give or grant this capability by merely connecting its users' computers to the information service's computers — the provider of the blogging service is essential to the process.

By analogy with a traditional telephone system, calling a customer service number does not imply your telephone provider offers an information service because the call center provides an automated process for you to access your account information. The service provided by your phone company is telecommunications; the service provided by the call center is an information service.

This assessment is necessarily from the perspective of the user, as the text of the act specifically describes the classifications in terms of the offering — the service that is purchased.

"Can broadband users indeed access these capabilities?"

The emphasis on access is important; yes, users of broadband Internet services can access those capabilities — but those capabilities are not offered by their broadband provider.

"Are there other capabilities that a broadband Internet user may receive with service?"

Yes, many other capabilities are available with broadband service today. These include direct interconnection of the user's equipment across the Internet, over VPN or other remote-access technologies: a fully transparent telecommunications service. They also include access to service providers that offer transit and connectivity to other networks, like mobile SMS (usually via email gateways) or Voice over IP services that interconnect with the PSTN (and are then subject to common carriage rules).

"If broadband Internet access service does not afford one of the listed capabilities to users, what effect would that have on our statutory analysis?"

It is my opinion that when a provider of broadband Internet does not themselves offer the information service the user is accessing, the provider is not "offering" the information service, but a telecommunications service.

Importantly, if the broadband provider is not responsible to support or engage the user in the use of an information service, they are not offering that service.

Thus, in context, it may be appropriate to classify some services provided with broadband Internet as information services; but when the connection is not being used to access those specific services offered to users, the broadband provider is providing a telecommunications service, and must abide by those rules.

In Paragraph 28

"We seek comment on how consumers are using broadband Internet access service today"

I use my broadband Internet service only as a transit connection to the Internet. I do not use any service provided by my Internet provider, including DNS resolvers, for which I use my own equipment. I find the conclusion that the entirety of my use constitutes either an information service or a telecommunications service exclusively to be preposterous. Even were I to use my ISP's DNS resolvers, the limit of the information service I am using is the DNS service itself. My request to retrieve a "tweet" is not serviced by my ISP, but by Twitter.

"Is the consumer capable of accessing these online services without Internet access?"

No, in many cases the services offered by third parties are offered only through the Internet. This question is the analog of "are consumers capable of making phone calls without telephone service" for which the answer is also "No," and which has no bearing on whether the telephone service itself is an information or telecommunications service.

"Could a consumer access these online services using traditional telecommunications services like telephone service or point-to-point special access?"

In principle, this is entirely possible — the information service provided by Twitter, for example, can be accessed via SMS messaging in mobile networks, and nothing except cost and pragmatic considerations prevents private individuals from establishing direct communication links to their preferred email or social networking providers.

"...are we correct that offering Internet service is precisely the service capable..."

You are correct in this assessment. Internet service may be required to access many of the information services consumers use, but this is a technological convenience, not a strict requirement. In the absence of the information service providers, Internet service would not itself provide those capabilities. It is important to understand that ISPs have often offered both information and telecommunications services, as a direct recognition that the pure telecommunications offering is not always sufficient. As more and better services have been made available by companies other than ISPs, the need (and even demand) for those supplementary services has rapidly diminished.

In Paragraph 29

In direct conflict with the statement that "broadband Internet users do not typically specify 'points' between which and among which information is sent online" is the prevalent and ubiquitous use of URLs (Uniform Resource Locators), which precisely address an online resource (in the same manner that a telephone number is the address of a telephone).

The commission's assertion that "routing decisions are based on the architecture of the network" is true, and is true of telephone networks as well. In land-line networks the telephone number is assigned to a switch serving the user, and that switch is located by examining the telephone number dialed. Number portability ensures that a database is consulted during the routing of the number. Similarly, mobile networks locate the user's device by consulting a database in the network to find the current location of the handset, in order to route the call to the correct destination. This is no different than the process to locate a website the user wishes to contact — location information is retrieved and the network routes the request to the destination, whether by name, telephone number, URI or other means.

Importantly in mobile networks, or when calling an unknown phone number, the user has "no knowledge of the physical location" they are contacting, yet telephony is a telecommunications service.

Consumers do indeed want and pay for this location function the network provides, but this alone is not sufficient to substantially change the entire service offered from a telecommunications service to an information service.

Importantly, traditional telephone does not require the user to carefully specify the path through their telephone provider's network, then through the called party's network in order to connect the call; it is unreasonable to assert the same function requires classification as an information service for broadband Internet, when it is present in existing telecommunications networks.

In response to paragraph 30

While providers may use a variety of technologies to transform the representation of user information, interwork networking technologies, or otherwise improve the functioning of their network, these do not constitute a change to the content or form of information the user is sending over the network.

The representation of data can take many shapes. The same speech can be represented by a video recording, solely the audio recording, or a textual transcript. Within computer systems each of these forms of representation may have many different technological encodings and representations. In nearly all cases neither of these (overall form, or technological encoding) will be changed during transmission to the other party, and in many cases the technology used will ensure that any modification is detected (as corrupt/erroneous) and rejected.

The kind of transformations the commission cites are one of several kinds:

"protocol-processing for interworking"

This is analogous to the kinds of processing done in existing telephone networks, especially between national and international networks in the global telephone network. Telephone service is a telecommunications service and is subject to these transformations, so broadband service is not an information service solely because of the same category of transformations.

"change in content" by use of firewalls

While it is certainly a service provided to end users, and also operated for the benefit of the broadband provider (to mitigate attacks on the network itself), firewalls and other systems blocking traffic do not constitute sufficient transformation to mandate the service be classified as an information service. This can be seen as consistent with FCC rules regarding telephone companies being allowed, in fact encouraged, to supply their users with the ability to block unwanted calls (robocalls). Providing such a service does not undermine the nature of the service as a telecommunications service.

As a user of broadband Internet, I do not want my provider to alter or restrict my use of the service in any way — and I am happy to release them of any liability as a result.

In response to paragraph 33

In the absence of better and more specific regulation or grants of authority from congress, the framework of Title II is appropriate as it allows the FCC to exercise appropriate discretion while maintaining the necessary protections (common carriage) to preserve the environment of open innovation in the face of the current oligopolies.

In response to paragraph 34

Indeed, the industry around the Internet has been largely unregulated, which has allowed massive societal change, economic growth, and general quality-of-life improvements. However, it should be noted that the innovations have all occurred "at the edge" — it is not ISPs that created YouTube, Facebook, or Netflix, it was not Verizon or AT&T that created Skype. The innovations that we have enjoyed are because the Internet was not constrained in its use, and these are precisely the sort of constraints that I strongly object to — the constraints that block innovation. Common carriage does not block innovation; instead it blocks protectionism. ISPs today do not have a history of innovation; at most they have incrementally improved their networks in order to provide more capacity, both for each individual user, and as demand has grown among all users. This is no small task, and is an important one; but it is infrastructure, the same as electricity, water, or telephone service — many of the concerns that apply to those apply equally well, yet some few may not. This is no reason to throw out the framework entirely.

In response to paragraph 35

I expect my broadband provider to be nothing more than a reasonably-priced transit provider. I do not want them to interfere with or manipulate my traffic through them in any way—if the Title II provisions are not enforced, I have no guarantee of that.

In response to paragraph 36

Whether an ISP offers supplementary services with the offering of a telecommunications service has no bearing on the telecommunications service itself. Telephone service does not become an information service because I subscribe to voicemail—only the voicemail service itself is classified differently.

I do not use, and would not separately pay for any such supplementary services. I am, in fact, likely unaware of the full extent of such services, as I do not have any interest in using them.

In response to paragraph 37

An ISP operating DNS resolvers and caching systems is engaged in both providing the telecommunications service (as an ISP) and some information services (the DNS resolver and caching). The DNS resolver provided is a convenience to their users, though an ISP need not provide this themselves. (Google, among other companies provides a DNS resolver for public use.) The ISP gains from providing a DNS resolver by limiting traffic across their network, as the resolver will cache queries (as part of the operation of DNS) allowing the traffic from their customers to be handled internally, without additional costs to the ISP. Some ISPs will also use their DNS resolvers to return search or advertising to users who attempt to access sites that do not exist. (This often provides little benefit to users, who would be better served by an accurate response that the site does not exist. Instead this provides benefit for the ISP, through advertising revenue.)

Caching is, again, primarily of benefit to the service provider, as it allows them to save infrastructure costs by reducing the required capacity. This may benefit end users in reduced cost of service, but the nature of the service is not vastly different.

Without ISPs provided DNS resolvers, the user's own equipment would need to perform that function. This is not unusual or difficult, and there are providers of DNS resolvers that are not ISPs a user could use as well.

In response to paragraph 39

While I believe the light-touch approach of the past 20 years is appropriate, I do not agree that the FCC should tie its hands with respect to ensuring that open and unconstrained access to the Internet is preserved, and this specific regulatory constraint (that of common carriage) requires classification as a telecommunications service. It is important that the rules be clear, both to industry and to consumers. Classification as a telecommunications service serves to provide the necessary tools for that clear guidance, and does not require the FCC to stifle innovation or impose heavy-handed regulation — in fact, it is clear the FCC does not intend to do so.

In response to paragraph 40

I do not believe that mutually exclusive categories are the correct framework given the complex nature of modern telecommunications. The difficult-to-interpret cases of DNS resolvers and caching systems would benefit from a clear separation from the underlying service.

In response to paragraph 42

The peering and exchange of traffic between ISPs is, of course, a private arrangement. However, in as much as a service provider "offers to the public" the exchange of traffic, it is engaged in offering a telecommunications service, and must be subject to the same rules as any other.

This is an especially critical consideration in the case of providers who are "Tier 1"—those that do not pay another provider for transit to any portion of the Internet, but who maintain a peering agreement allowing them to reach every destination. These providers, in particular, have exceptionally powerful roles and as a matter of public interest must be encouraged (or regulated) to abstain from the abuse of that power.

In response to paragraph 48

The Commission's claim that changing the classification again will alleviate regulatory uncertainty is specious — a single change introduces uncertainty regardless. Multiple changes only increase, rather than alleviate, the uncertainty.

In response to paragraph 50

"Do these isolated examples justify...?"

Yes, absolutely. In all cases the examples have demonstrated that the industry is unwilling to police itself, and that consumers are harmed. The markets have substantially failed: there is a lack of competition; where competition exists, the offerings are often not equivalent; and there are too few offerings for a real market to emerge. As well, high costs of entry prevent competition from developing.

"Is there evidence of harm?"

Yes. The ISP industry has record-low customer satisfaction.

In response to paragraph 55

There is no substantive difference between mobile and fixed broadband service in terms of function. I cannot agree with the commission that a service offered to literally millions of end users (the public) is a private service.

In response to paragraph 60

The inconsistency is also resolved by continuing to classify as a telecommunications service, without any need to reconcile.

In response to paragraph 61

Given the vast technological changes happening in the telephony world, it is not unreasonable to conclude that the Internet and existing commercial mobile radio services are increasingly and completely interconnected.

In response to paragraph 73

Removing a "vague" set of guidelines will not promote certainty, but instead uncertainty. Firstly, among consumers, who will have to trust and understand what providers are selling individually, instead of relying on a standardized regulatory framework. Secondly, for service providers: removing guidance does not change what consumers expect of them, nor what behavior the commission may find acceptable or not — the set of guidelines increases certainty as it becomes more restrictive. You cannot have light-touch regulation without significant uncertainty (due to ex post enforcement).

In response to paragraph 75

Without any rule, there can be only one outcome — ISPs continuing to "innovate" by interposing themselves and rent-seeking. Common carriage is the appropriate standard.

In response to paragraph 77

Allowing companies to ignore their customers' protections if they are too small is a mistake, as is requiring burdensome regulation. Enforcement actions should be taken as needed, with a focus on large impact. I see no reason to modulate the protections offered.

In response to paragraph 79

On the surface, yes, an ISP can sufficiently curate and sell a non-telecommunications product. It would be extremely surprising to me if that product would be competitive with the usefulness and expense of un-curated Internet access. I cannot imagine such a service as broadly useful or demanded as the general telecommunications service of Internet access. I certainly would not buy such a service, whatever the price.

In response to paragraph 80

By codifying a no-blocking rule that ISPs already abide by, no burden is applied to the ISPs. The rule cannot be burdensome unless the commission chooses to make it so.

In response to paragraph 81

The rule is sensible and meets the expectations of consumers buying a telecommunications service.

In response to paragraph 82

Common carriage is the appropriate framework, the reclassification proposed is in error.

In response to paragraph 83

The prohibition on throttling of selected traffic benefits consumers mainly through the innovation that is enabled — an equal playing field enables easy entrance of competitive offerings and new uses that cannot be predicted.

Differentiated prioritization may have specific applications where it provides a higher quality of service, but it must be under the control of the consumer, not the ISP. This is critical to the adequate performance of latency-sensitive applications; but the consumer must be able to choose which traffic is prioritized and how that occurs. The simplest solution is to prohibit ISPs from doing so at all, and require that the consumer's equipment do any prioritization.

Preventing differentiation will increase competitive pressures and is a net positive for consumers.

In response to paragraph 89

As a functioning market requires consumers to have all relevant information available to them, transparency is a requirement.

In response to paragraph 90

The marketplace is not competitive. Transparency and reporting requirements help consumers apply pressure to the few ISPs available to them in order to get the service(s) they want.

In response to paragraph 93

The ideal case is that no exception is allowed to the bright-line rules. ISPs already have safe harbor protection from liability for unlawful content; giving them the authority to determine what content is unlawful and to prevent its transmission grants them a significant privilege to censor the network. "Reasonable network management" is likewise designed to give ISPs broad latitude to censor the contents of the network. They should be held to the high standards of common carriage. Ideally only specific network management practices would be allowed, but regulation and rule-making can hardly keep up with technology, so no recourse other than case-by-case evaluation is possible.

In response to paragraph 94

Protecting the innovation space represented by the over-the-top offerings is extremely important to maintaining the innovation driven by the Internet.

In response to paragraph 95

Mobile and fixed broadband do not differ in the service they provide, so no fundamental differences should be present in the regulation of those services.

Building a Better Terminal


First, let's start with a bit of background.

I like using a terminal / console. The direct interaction and expressiveness that a command line gives me is very powerful, and efficient. I'm writing this post in vim, using reStructuredText. I do all my file management in a console window, directly in the shell.

So when I think about what a 'better' terminal would be, I don't mean to replace it with something that fundamentally changes the interaction model. For some, a better terminal would be more like a notebook, with re-runnable, editable commands, able to embed images, links, and videos, and interactive with a mouse. That escapes the bounds of what I believe to be generally useful as a replacement for existing terminals. (The IPython Notebook is a very good example of something that is 'like' a terminal, or interactive console, but serves a different purpose and audience.)

All that being said, there are definitely limitations of current terminal emulators that I believe are worth addressing. They fall into a couple of broad categories, with some basic requirements:

Many of these requirements are borne of the usage of a modern terminal emulator and its environment, given the differences between current usage and the historical context in which terminals were developed.

A modern terminal emulator is a piece of software responsible for interacting with the I/O system in some fashion (either through a graphical environment like X, through a kernel primitive like Linux's CONFIG_VT, etc.) and interacting in a specified way with software run on that terminal. It functions as a basic API layer for certain kinds of software.

This is different from the historical usage, in which a terminal was a device attached to a computer (generally over a serial line) that provided access to it, usually a multi-user machine. As such, the I/O system could only communicate in-band over a stream of bytes, and many of the limitations of current terminal emulators are a direct result of this arrangement.

Even in the case of a remote terminal, we now have much higher bandwidth in most cases, so a more robust terminal protocol between software and the I/O system is desirable. Additionally, given no compatibility constraints, we can define an out-of-band mechanism for control of the terminal, rather than relying on a complex and ambiguous stream of bytes for all communications.

I don't have a clear idea of a design; I am only outlining what I feel are the requirements of a 'better' terminal.

Thanks for reading.

6 Word Stories

Enimihil Greg S.
#6words #creative #fiction #short #story








After the Fall

You missed the Rapture. Good luck.

Alarm will sound...

Nobody's home. Knock, then run fast.

How to boot a Ubiquiti RouterStation Pro from RedBoot over tftp

#embedded #linux #openwrt #redboot #router #routerstationpro #technology

Just a quick note, more to myself than anyone else, since I wasn't able to easily find this information online while trawling through the OpenWRT and Ubiquiti forums.

I wanted to try booting different kernels on the device before committing to flashing, so that all I had to do was reboot the board and everything would come back up as normal. It seems obvious that this should be possible, and with a bit of digging around in the RedBoot docs, trial and error, and a few lucky guesses, I was able to figure out the incantation to make RedBoot load an uncompressed Linux kernel (ELF) and boot it:

load -m tftp -h openwrt-ar71xx-generic-vmlinux.elf
exec -c "board=UBNT-RSPRO panic=1"

Note that this does not change the root filesystem, so if your modules aren't compatible, this won't work, but I suspect if you use an embedded initrd you can run the system entirely out of RAM without touching flash.

Traditions in Computing

Last Updated
#essay #opinion #superstitions #technology

This essay is in reaction to the essay Programmer Superstitions by Jesse Liberty (from 2006).

Technological Traditions

From the outset, our usage of computers and information systems is wholly formed by the designers of those systems. What is, in fact, somewhat remarkable is the unity with which those designs are created, across oceans, cultures, and decades. Some are "obvious" in their correctness, others less so. Many things are done in a particular manner simply because it was the first or simplest way that came to the mind of the designer of the system. Many other aspects of technology are designed to fit the general notions of how that kind of technology (a certain kind of software) is expected to behave. These traditions are mostly nascent and often the learned lore of the industry; few formal instructional programs exist with the intent to teach them.

As technologists, we must be careful to evaluate those traditions in light of the current context; misapplying them can be more damaging than simply ignoring them, as expectations will be set when a particular tradition is invoked, whether by behavior or interface.

At the heart of these traditions are technologists, geeks, hackers, and other highly technical professions — these people have created, embraced, rejected, revised, and updated the various traditions we see today.

Programmer Superstitions

Programmer superstitions are among the most entrenched of these, whether simply due to a sort of intellectual laziness or something more sinister. Programmer Superstitions raises a few points. Most of the points are considered in a vacuum, in a single-platform, nearly single-language manner. Some examples seem to ignore any technological ecosystem other than the author's native one; this is one of the purposes of calling out superstitions: to break out of the universe in which we are so entrenched.

Case Sensitivity

I have a gripe with the author's arguments advocating the case-insensitivity of programming languages. He states that the sensitivity of programming languages to the case of symbols is historical baggage from C that should not be accepted blindly. Not blindly, sure, but case-sensitivity was chosen for a reason. At the very least, programmers expect most languages to be case-sensitive; a historical argument, to be sure, but the expectation is case-sensitivity, and being the exception is more likely to cause problems.

More important, to my mind, is that filesystems, command shells, and nearly every other system in a computer are case-sensitive. At least on my computer. Windows filesystems are case-insensitive but case-preserving, as is HFS on a Mac. This is ostensibly fine, but it is tricky to deal with programmatically: program code cannot simply ignore case in data. Exceptions to default case-sensitivity are, however, very common (e.g. search).
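The distinction between case in data and case in comparisons is easy to see in code. Here is a minimal Python sketch (the names and data are purely illustrative): the entries keep their original case, and case-insensitive matching, as in search, is something the program must opt into explicitly.

```python
# Data keeps its case; case-insensitive behavior (e.g. search) is an
# explicit choice made in code, not a property of the data itself.
names = ["README.md", "readme.md", "Makefile"]

def find_insensitive(target, entries):
    """Return entries equal to target, ignoring case via str.casefold()."""
    folded = target.casefold()
    return [e for e in entries if e.casefold() == folded]

print(find_insensitive("readme.MD", names))  # matches both README.md and readme.md
```

On a case-sensitive filesystem the first two names are distinct files; a case-insensitive, case-preserving filesystem would have refused to create the second.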

Orthogonality is an important benefit, but could be sacrificed were case-insensitivity a demonstrable win. I have never had a case-sensitive programming language cause undue headache that would have been avoided by ignoring case. Sure, typos happen, but I am a competent typist and such problems are not so difficult to solve as some may argue. The disadvantage is large: having distinct appearances with the same meaning. Programming languages are one of the most precise ways to express ideas, and the imprecision allowed by case-insensitivity seems an additional burden on an already complex problem.

One other point to make, a social one, is that programmers are heavily indoctrinated to preserve data. Throwing away information about case is like throwing away the most significant bit of a number. Allowing a variable name or language keyword to be spelled in a variety of ways permits variations in programming style that can interfere with version control and refactoring tools. Some sort of normalized case would probably be used in preference to the many combinations of cases allowed. Being able to change nearly every character of a program and have it remain functionally identical does not seem a desirable attribute in a programming language.

For more arguments (both ways) on the issue, see this Lambda the Ultimate thread.

Iteration Variables

Another point is that iteration variables in counted loops (like for loops) are often named according to the mathematical convention of using i, j, k, ...; yet the author does not make a case for any other convention. He suggests that a and b would "make much more sense" (emphasis added), which is neither obvious nor defended. Surely an established convention in a related discipline is sufficient reason? There is no technical reason to call iteration variables any particular name, but many programmers will be familiar with the mathematical custom, especially those with a formal education, so the idiom will be natural and provide meaning where otherwise there would be arbitrariness and individual variation. I don't mean to imply that variables of iteration should always be named in such a mathematical fashion, only when the mathematical analogy is useful. Naming a variable index, count, or any other descriptive name is, of course, ideal in a number of circumstances, especially in deeply nested and complex structures. The simple iteration names are useful for inner loops with generally uncomplicated structure.
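To illustrate the convention with a hypothetical snippet (not from the original essay): short mathematical index names read naturally in a simple nested loop, while a descriptive name earns its keep once the loop body does more than index.

```python
# i and j follow the mathematical convention in a simple nested loop.
matrix = [[1, 2, 3], [4, 5, 6]]
total = 0
for i in range(len(matrix)):
    for j in range(len(matrix[i])):
        total += matrix[i][j]
print(total)  # 1+2+3+4+5+6 = 21

# A descriptive name is clearer when the body carries more meaning.
for row_number, row in enumerate(matrix):
    print("row", row_number, "sums to", sum(row))
```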

Data Hiding

I do agree with the author that the "object-oriented" principle of data hiding is usually useless and often a noisy and inflexible idiom, though I suspect the author is less concerned with the utility of data hiding than with the discomfort he feels at its absence when programming.

The Python language does not have a mechanism to enforce data hiding, but is a fully capable object-oriented language. (The C# language lacks binary compatibility between properties and object fields, where Python does not, so there are technical reasons in some languages for the use of accessors.)

I find I never do things in code that the compiler or runtime would usefully have prevented by enforcing data hiding. Any time I break encapsulation, I leave a big #XXX: hack or similar comment to warn that the design needs to change, or that the code needs refactoring.

Being able to do that (rather than having to do the refactoring at that moment) allows for increased productivity, at the expense of some overhead when the refactoring needs to be done en masse.