a blob lives here

Tried it for four hours. It was okay. Not sure that I regret the experience but don't really feel that attached to it.

I liked the character creator and the clothing system. Progression feels very slow and the game seems very grindy. And it doesn't really feel like you are grinding towards anything.

Maybe worth another try when they finish the game.

  • Documentation is very sparse
  • JHBuild wants an absolute path to the moduleset file
  • There can seemingly only be one moduleset file?
  • It keeps trying to build in a separate build dir but without referencing the original makefile
  • Telling it that it can't build in a separate dir just results in it trying to use nonexistent build folders

I gave up after about half an hour. It looked like something I would like to use but it's trying to do some archaic GNOME-project-specific stuff and I don't care to decipher what it's doing wrong.

I did a little experiment using please.build to build various audio plugins for Linux. These tend to have particular requirements: doing git checkouts (sometimes with submodules) or downloading release tarballs, applying patches so old plugins will still build, dealing with the mixed dependency versions that different plugins use, and all sorts of non-standard build or bundling steps.

please worked fine for this, although my build scripts are a bit stupid: they just download the exact version of everything a plugin wants and use a blank shim script to do nothing with them. Then a job moves them to wherever the plugin vendored them, and we build packages by calling whatever build system the plugin natively uses.
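For flavor, here is roughly what one plugin's BUILD file looks like. The URL, hash, paths, and commands are all made up, but remote_file and genrule are real Please built-ins:

# A hedged sketch; adjust names and paths to the actual plugin.
remote_file(
    name = "surge_src",
    url = "https://example.com/surge-1.9.0.tar.gz",  # hypothetical tarball
    hashes = ["<sha256 of the tarball>"],
)

genrule(
    name = "surge",
    srcs = [":surge_src"],
    outs = ["surge.lv2"],
    # Drive the plugin's native build system instead of modelling every
    # compile step in Please.
    cmd = " && ".join([
        "tar xf $SRCS",
        "cmake -S surge-1.9.0 -B build",
        "cmake --build build",
        "mv build/surge.lv2 $OUTS",
    ]),
    timeout = 3600,  # assumption: a per-rule timeout so slow builds aren't killed
)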

Something to note is that, being a Bazel clone, it really wants to be the one performing the build steps as well. So one feature we have to short-circuit is the one that kills stuck builds after a timeout: running a build can sometimes take more than ten minutes and please will see this as a stuck job and kill it. Other than that it actually works pretty well.

Note though that I am not really playing to the strengths of a Bazel system with this. It's basically just used to coordinate downloading a bunch of files and compiling them normally. A “proper” build script would take over the build jobs from software like Meson and CMake so that no time is ever wasted on idle CPUs, and could even use the remote-execution tools to scatter build jobs across a datacenter.

I may play around with Gnome's JHBuild next. It's closer to what I am actually doing here and it's worth taking a short peek. Another alternative I'm looking at is building plugins via Flatpak.

The unit of a single thought form (the engram) is, basically, a large pool of toggles where only a small number of them are active at a given time.
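A toy version of that in Python, with arbitrary sizes:

# An "engram" as a sparse distributed representation: a big pool of
# toggles with only a few switched on. All numbers here are arbitrary.
import random

POOL_SIZE = 2048   # total toggles in the pool
ACTIVE = 40        # how many are on at once (~2% sparsity)

def random_engram():
    return frozenset(random.sample(range(POOL_SIZE), ACTIVE))

def overlap(a, b):
    """Similarity is just how many active toggles two engrams share."""
    return len(a & b)

apple_taste = random_engram()
some_color = random_engram()
# Even unrelated engrams overlap a little by chance; the distance between
# any two patterns always exists.
print(overlap(apple_taste, some_color))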


Google's new generalist agent is basically a study in using large linear maps to translate a given input (posed as one of these engrams, synthesized) to an output (another engram, synthesized.) There is nothing new about “autoencoders” but now one has been used to deliberately “solve” a number of distinct problems using just one bigger codebook.

The Doc2Vec people have looked into how engrams encoding the meaning of words can be determined by forcing a model to predict output words and documents. They then just take the middle layer and use it for other shenanigans, like seeing how similar two documents are.
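A minimal sketch of that with gensim's Doc2Vec (assuming gensim 4.x is installed; the toy corpus is made up):

from gensim.models.doc2vec import Doc2Vec, TaggedDocument

docs = [
    TaggedDocument(words="the cat sat on the mat".split(), tags=["d0"]),
    TaggedDocument(words="a kitten rested on a rug".split(), tags=["d1"]),
    TaggedDocument(words="quarterly earnings beat estimates".split(), tags=["d2"]),
]
# The model learns to predict words from a per-document vector; that
# middle layer is the "engram" we keep.
model = Doc2Vec(docs, vector_size=32, min_count=1, epochs=200)

print(model.dv.similarity("d0", "d1"))  # cat documents: relatively close
print(model.dv.similarity("d0", "d2"))  # cats vs. finance: farther apart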

The Butterfly Transform work has examined how linear maps not only solve a multitude of real-world math problems, but how they can be rearranged in a particular way so that the same model, with different parameters, does it efficiently.

Numenta's cortical “spatial pooling” and sparse distributed representations attempt to model memory by studying the actual brains of living systems. The cortical machinery has a very complex repeating circuit, but most of that circuitry seems to exist to conditionally fire or inhibit outputs which, viewed from far away, create a kind of huge bitmap of which parts of the machinery are active for a given input.

Cortical's “Semantic Folding Theory” tries to imitate what Word2Vec and the Doc models do, but for Numenta's cortical format. While the Google models just sort of accidentally do it while trying to learn to imitate outputs from inputs, the Cortical one uses deliberate steps to create the bitmap from a system of input words. I've privately replicated at least the creation of the initial maps but haven't done much more with them yet.

It's interesting that the hip neural network school and the more classical AI approaches are converging on the same ideas of how individual thought forms are held. It seems the conjecture was right: thoughts are stored in a large format where they are defined solely in relation to one another. There are also other inevitable and sometimes hilarious consequences, like the taste of an apple being similar to a color. You wouldn't think that makes any sense, but some number of ons and offs have to go somewhere, and the distance between those patterns has to exist because of it.

Personally I'm still looking into Numenta and classical hidden Markov models. Google has impressive results, but they are still reliant on piling an unreasonable amount of compute behind everything; Numenta and hidden Markov models are capable of real-time learning.

I made a post about an “infrastructure as code” tool but it uses RDF triples as the configuration format. This seems cursed... but maybe not.

Infrastructure as code tools basically:

  • Represent the current world state as a graph.
  • Read the desired world state as a graph.
  • Find the difference between world states and write a change tape.
  • Transform or hand-edit the tape ← most don't let you do this part.
  • Evaluate the tape to commit the changes.

In HashiCorp's config language the computer cannot really detect you moving a user from Okta to Auth0: it interprets this as the Okta user being destroyed and another object being created at Auth0. The syntax is nice to type and look at but it lacks certain tags that simplify computing differences. Also, Terraform doesn't let you edit the tape.

RDF triples do have unique names for individuals, so we always have an anchor for graph nodes and can easily tell whether a node still exists in both ontologies. We only need to look at the edges where attributes live. So the move would actually register to tape as change: provider okta -> auth0, which could then be processed with a special migration step or degenerate to something else like register: auth0 followed by deregister: okta.
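A minimal sketch of that diffing step (steps 1-3 of the list above) in Python; the triple layout and tape format are made up for illustration:

# Diff two triple sets into a change tape, anchored on subject names.
old = {("iceworks", "provider", "okta"), ("iceworks", "password", "hunter2")}
new = {("iceworks", "provider", "auth0"), ("iceworks", "password", "hunter2")}

def change_tape(old, new):
    tape = []
    for s in sorted({s for s, _, _ in old | new}):
        before = {(p, o) for (s2, p, o) in old if s2 == s}
        after = {(p, o) for (s2, p, o) in new if s2 == s}
        # The subject is the stable anchor; only attribute edges differ.
        for p in sorted({p for p, _ in before | after}):
            was = {o for p2, o in before if p2 == p}
            now = {o for p2, o in after if p2 == p}
            if was != now:
                tape.append(("change", s, p, was, now))
    return tape

for op in change_tape(old, new):
    print(op)  # ('change', 'iceworks', 'provider', {'okta'}, {'auth0'})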

That's still not necessarily simple. If you change a password and also migrate to a new service, the tape would register a new password on a service whose account you then immediately delete. So there's still derpiness.

Manually editing an ontology is not a lot of fun though. There are tools for it but you could argue there can be tools made for anything.

So maybe what we would be doing is embracing the Unix philosophy a bit: having one tool whose whole job is just diffing two ontologies and giving you the change tape, and then other tools that process the tape, much the way the old Unix typesetting pipelines around roff worked. Admins can also edit the tapes by hand or write their own scripts to modify them. That lets them enforce rules like creating all the new service accounts, then waiting for manual approval, then refactoring the blue group, waiting for approval, then refactoring the rest, then deleting the old accounts, etc.
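A sketch of one such tape tool, assuming a made-up format of one JSON op per line:

# Read the tape from stdin, hold anything destructive for manual
# approval, and pass everything else through untouched.
import sys, json

for line in sys.stdin:
    op = json.loads(line)
    if op["kind"] in ("deregister", "delete"):
        op["hold"] = "manual-approval"  # made-up tag a later tool honors
    print(json.dumps(op))

You would chain it like diff-ontologies old.ttl new.ttl | hold-deletes | apply-tape, where all three tool names are hypothetical.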

You'd probably have to bring your own ontology editor though. Nobody wants to write this crap by hand:

:iceworks rdf:type owl:NamedIndividual , :VPS ;
    :HostedAt :Datacenter01 .
[ rdf:type owl:Axiom ;
  owl:annotatedSource :iceworks ;
  owl:annotatedProperty :HostedAt ;
  owl:annotatedTarget :Datacenter01 ;
  rdfs:comment "screaming inside" ] .

tl;dr Bitcoin pretending to be Ripple for a while.

  • Payment Channels: when parties put funds in escrow and agree to keep the books themselves rather than on the public ledger.
  • Payment Channel Networks: when people trade money that only exists inside the payment channels and doesn't actually exist on the public ledger.

At some point the internal ledger of a payment channel has to be reckoned. This moves the money in appropriate amounts to the parties that are supposed to have it while sparing the blockchain all of the intermediary exchanges that happened.

So a player can (hypothetically) stake $100 with Valve, and when they buy Steam games there is no actual transaction, just an agreement that when the channel is reckoned Valve will keep whatever amount the player has spent on games. At the end of the month the amounts are doled out to each party and, as far as Bitcoin is concerned, only a single transaction has occurred. Meaning those individual purchases have zero transaction fees and no validation overhead.
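A toy of that arrangement in Python; the class and numbers are made up, and a real channel uses signed commitment transactions rather than a trusted object:

# Track balances off-chain; only the final settlement touches the chain.
class Channel:
    def __init__(self, player_stake, merchant_stake=0):
        self.balances = {"player": player_stake, "merchant": merchant_stake}
        self.updates = 0

    def pay(self, amount):
        """An off-chain update: both parties just agree on new balances."""
        self.balances["player"] -= amount
        self.balances["merchant"] += amount
        self.updates += 1

    def settle(self):
        """The only on-chain transaction: final balances, nothing else."""
        return self.balances

ch = Channel(player_stake=100)
for price in (10, 5, 20):  # three game purchases, zero on-chain fees
    ch.pay(price)
print(ch.settle())  # {'player': 65, 'merchant': 35} after one on-chain tx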

Entire networks of these arrangements can be made, which basically means subsets of the network are working the way Ripple does, with some form of mutual trust network in place. The mutual trust networks are at some point reckoned with the blockchain to actually move the wealth.

A gross summarization of how Holochain works.

Holochain is a blockchain system that does not rely on proof of work (Bitcoin), proof of stake (Loki, NXT), or central trust delegation (Ripple.) It works on a kind of “trust but verify” system where people run their own software, but the software produces a log which can be audited by other users of the same software. Any user can falsify their own ledger, but everyone else is capable of detecting the tampering and can choose whether to exile that user.

Data

Each user-program pair has its own public blockchain and can write new entries into it at any time. After an entry is appended it is distributed to other users on the network via a large distributed hash table. Public-private keys are used so only the owner of the user-program pair can write new events to their chain. Custodial holding of public fragments in the hash table prevents the owner of the keys from modifying their own history and minting a new blockchain undetected.

Private data can be put into blocks, which are then referenced (via content hashes) in the public chain. This allows you to keep some secret, like predicting who the next president will be. The public chain certifies the prediction happened, and once the private blocks are published it can testify that this is the prediction you made and told everyone about.
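A rough sketch of the idea in Python (illustrative, not Holochain's actual format): an append-only chain whose entries hash their predecessors, with private data committed by content hash:

import hashlib, json

def digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

class SourceChain:
    def __init__(self):
        self.entries = []

    def append(self, payload: dict) -> dict:
        prev = self.entries[-1]["hash"] if self.entries else "genesis"
        body = json.dumps({"prev": prev, "payload": payload}, sort_keys=True)
        entry = {"prev": prev, "payload": payload, "hash": digest(body.encode())}
        # In the real system this entry is signed and fragments are pushed
        # to other peers' DHT stores, so quiet rewrites get caught.
        self.entries.append(entry)
        return entry

chain = SourceChain()
secret = b"the next president will be ..."
# Publish only the content hash now; reveal `secret` later to prove it.
chain.append({"type": "prediction_commit", "content_hash": digest(secret)})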

Thus the only public information is whatever an application publishes. Private state can be maintained indefinitely or revealed at a later date for auditing.

The Program

Users interact with a program in some way. The program then implements the rules for signalling, verifying data, and issuing updates. Other users in the network who run the same version of the program are able to verify that each of them is following the rules.

A summarization of proposed social networking reforms.

  • Censorship: Controlling access to possibly controversial ideas.
  • Moderation: Controlling access to bad actors.
  • Bad actors: trolls, shills, scammers.
  • Trolls: Make bad-faith posts.
  • Shills: Execute a script to target particular communities, disrupting conversations to inject assigned narratives.

The proposed concept is that when you post a reply to another post anyone who follows you can see the reply and a link to the original message being replied to (visibility upstream.) Other people's replies are only visible if there is approval (visibility downstream.)

  • If you follow the person making the reply then the reply is approved.
  • If the poster approves of the specific reply then the reply is approved.
  • If the poster has issued blanket approval to someone (ex. a mutual follower) then their replies are automatically approved.
  • If someone else you follow has seen the reply and approves it then the reply is approved (for you.)
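A sketch of those four rules in Python; the data model is made up for illustration:

follows = {"alice": {"bob"}}                 # who follows whom
blanket_approved = {"poster": {"carol"}}     # poster's standing approvals
approved_replies = {"poster": {"reply-42"}}  # poster's per-reply approvals
vouched = {"bob": {"reply-99"}}              # approvals by people you follow

def reply_visible(viewer, replier, reply_id, poster):
    if replier in follows.get(viewer, set()):            # you follow the replier
        return True
    if reply_id in approved_replies.get(poster, set()):  # poster approved the reply
        return True
    if replier in blanket_approved.get(poster, set()):   # blanket approval
        return True
    return any(reply_id in vouched.get(v, set())         # someone you follow vouched
               for v in follows.get(viewer, set()))

print(reply_visible("alice", "carol", "reply-7", "poster"))  # True, via blanket approval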

Thus conversations take the form of a kind of whisper network where public posts can be shouted into the void and anyone can respond, but only a limited collection of people can see the replies until someone is willing to vouch for either the poster or the specific post.

Citizen Sleeper is a game about playing as a robot who escaped from big capitalism. The company in question likes to put copies of people's minds into robots that are programmed to fail: unless they sate their addiction to patented drugs, their artificial bodies decay. You escape and end up on an ancap space station, and from there you basically have to find and do odd jobs to buy the food and drugs you need to stick around.

Each “day” you get a number of D6s rolled based on how well maintained your body is. You then perform actions by dragging the dice into slots, which tells you how likely you are to succeed or be punished. There is also a hacking mechanic that requires putting the exact correct die into it to progress.

The hacking die mechanic can mess up your timetable. You have to go check whether it's a good day to go hacking (whether you've got the exact die you need), and it's the same for each task available, so you may not be able to do any computer tasks that day. Hacking doesn't really play as big a role as you would think, though. It's needed for one particular ending and there are a couple of minor bonuses for it, but I neglected the matrix and just did other things.

It turns out one of the best ways to get money in the game, if you are an intelligence character, is to just play the stock market (it's an action in the game, so there's no real-life analysis involved) until you unlock the area which allows you to scam people at cards. Then you just scam people at cards and comfortably afford your drugs and food for the rest of the game.

There were genuinely sad moments in the game: friends who are hinted at but didn't make it (and you only learn about some of these things by doing optional stuff instead of what you were supposed to be doing), and other robots you find who end up getting shot because of reasons. Choices aren't really all that meaningful, though. Optional side content just gives you access to lore that you can't do anything with in-game, and there are a handful of walls that require you to dump dice into them day after day.

There are various hard timers and events that run on cycles. It's not a big problem though. As long as you understand that action dice are needed to do anything and you keep your body condition maxed, you don't really have a shortage of actions. There is also a perk you can buy that allows rerolling dice once per day, so if you get some trashy rolls you just spend the good ones, reroll what's left, and it's probably fine. The hard timer for the story can actually be cancelled through story events as well, although some story events make this harder (by completing one quest line you actually cut off a black market contact, so you can no longer buy drugs from them.)

Overall I liked it. I played it in one sitting for around 6-8 hours before getting some endings and binning it. Although you're never much more to anyone than a robot that is really useful for difficult tasks. And nobody ever asks what your name is.

Older than this blog itself is my Zettelkasten, called /z/. That particular section of the site is also accessible as a Gemini capsule, partly out of caprice. I don't believe anyone actually reads it even though it's probably one of the largest collections of Gemini pages around.

A zettelkasten is a kind of “slip box” based around a German academic (Niklas Luhmann) who rejected the idea of collecting research papers for one project and then throwing them all out. Instead, notes about papers are placed on 3x5 index cards and kept in a box to be reviewed again at random times. There are some pretty decent commercial systems for this but you can also just do it with note cards in physical form or a pile of files. Mac users used to have a nice one which inspired clones for other systems.

/z/ is built out of a couple of scripts I put together and shared with a few people. There are various systems that you would probably like better, especially if supporting Gemini is not a requirement. For example, Emacs has a nice system based on org-mode, VS Code has a similar plugin, and completely standalone options also exist. Of special mention, of course: in Flancia there is an Agora.

Ideally a slipbox in digital format makes it easy to randomly review old cards again (maybe they suddenly become inspiring later on.) Also important is being able to find cards that become relevant again. Maybe there was a study that cotton candy causes instant death, and six months later you need to slap someone in the face with it. Ah, but you can't find it again and are stuck trying to do web searches after the fact. If it's properly filed, you can find it again, and now you are a genius instead of a crazy person.
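The random-review helper can be almost nothing. A sketch, assuming cards are plain files under a z/ directory:

# Pick a random card and print it for review.
import pathlib, random

cards = list(pathlib.Path("z").rglob("*.gmi"))  # or *.md, *.txt ...
card = random.choice(cards)  # assumes at least one card exists
print(card)
print(card.read_text())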

Structure

The basic rule is that when you read some article and think it might be useful to review or cite again in the future, you go and write down the particular facts of interest on a card. The card should reference the original document, and maybe even a cached copy if you keep those. These are “literary cards”, each representing a token of knowledge about a single piece of literature. Literary cards are going to stack up pretty tall in your slip box, so you will eventually need to start contextualizing them in reference cards (“hubs.”)

On /z/ the hub cards point to literary cards, with each link named for the particular importance of that card to the hub. A hub about a particular virus might link to studies about aspects of the virus, so a link will have a name like “Lab reports the bimbo disease turns one in five infected into horny but dumb sluts”, which points to the literary card holding that paper's particular claims of interest, which in turn links to the original source if needed.

If you are familiar with the somewhat failed “Semantic Web” idea, then it's similar. A particular article has a list of claims and proofs. We make a reference to those claims and proofs, which links back to the article, and then we make reference cards that cite the particular claims. Hopefully everything traces back to a source.

Classical Trivium

If you get very bored, Logos Media has a very long podcast series about the classical trivium and its importance, but the very summarized version is that classical education is about three things:

  • Grammar: identifying the symbols for a given topic
  • Logic: relating symbols to one another and resolving conflicts
  • Rhetoric: rewriting the resolved conflicts into a narrative

This process moves forward in a loop and the narrative becomes a symbol for more complex reasoning and so forth.

Hubs of Hubs

Depending on how you structure your zettelkasten, it can be helpful to explain articles to yourself so that when you review the cards in the future you don't have to re-read the original source material. And once you have worked out what a paper is saying, the reference card forms the rhetoric for a collection of individual articles.

I have not done hubs of hubs, but I suspect they could become necessary at some point. For example, a collection of Markov chain articles necessitates a hub for Markov-based learning systems, which in turn belongs to a collection on machine learning, and so forth. But a zettelkasten is a personal thing, and you don't particularly need hubs other than to help you find particular citations again in the future.