Factomize Core Dev Update 5

Who

Short Term Review

One of my worries with my approach to learning the codebase (see previous blog posts one and two) was that I might not be able to transition well from a theoretical reading and analysis of the code to writing new code, which, if you’re familiar with development, are two very separate skills. It hasn’t been seamless, but I’ve had some time to reflect since then and I would consider the approach to have worked pretty well.

That said, I haven’t talked to the other core devs about how they have been faring. It would be nice to hear from others about this topic.

Update on my status

For a while I worked on odds and ends, the results of which you can see in my pull requests. That was fun, but after a while I got the itch to do something more. I wanted to take an area that I saw could be improved and rewrite it: the underlying P2P network.

What’s wrong with it?

There’s nothing wrong with the current P2P implementation in the sense that it works. The problems I see are architectural: it has a confusing structure that I stumbled over more than once in my analysis. It uses the same channels for sending parcels and commands, connections and peers are maintained separately, and peer management in general is very limited.
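
To illustrate the first point, here is a simplified sketch (with made-up types, not the actual factomd code) of why carrying parcels and commands on one channel is awkward: every consumer has to type-switch on interface{} values, whereas separate channels keep application data and control flow apart and let the compiler check each path.

```go
package main

import "fmt"

// Hypothetical stand-ins for illustration only; the real factomd types differ.
type Parcel struct{ Payload []byte }
type Command struct{ Name string }

func main() {
	// Roughly the current situation: one channel carries both kinds of
	// messages, so every consumer has to type-switch on interface{} values.
	mixed := make(chan interface{}, 2)
	mixed <- Parcel{Payload: []byte("block data")}
	mixed <- Command{Name: "dial-peer"}
	close(mixed)
	for msg := range mixed {
		switch m := msg.(type) {
		case Parcel:
			fmt.Printf("route parcel (%d bytes)\n", len(m.Payload))
		case Command:
			fmt.Println("handle command:", m.Name)
		}
	}

	// With separate channels, each path is type-safe and easier to follow.
	parcels := make(chan Parcel, 1)
	commands := make(chan Command, 1)
	parcels <- Parcel{Payload: []byte("block data")}
	commands <- Command{Name: "dial-peer"}
	fmt.Printf("parcel of %d bytes\n", len((<-parcels).Payload))
	fmt.Println("command:", (<-commands).Name)
}
```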

The code is also tightly coupled, which makes it harder to expand for future features or to finish implementing current ones, like the peers.json file that tracks peers across reboots. The code for that exists (and currently operates in write-only mode to create the file), but filtering which peers should be written to it has proven difficult.

How is it being reworked?

The P2P code is fairly isolated from the rest of factomd, with two interfaces: the input/output used by the rest of the code and the packets sent to other servers. Everything in between, from peer management to routing, is plug-and-play. I deleted all of the in-between code and am in the process of rewriting it from scratch. I have bitten off quite a large chunk, but I think the end result will be worth it.
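
As a rough sketch of that boundary (the names here are hypothetical, not the final API of the rewrite), the application side only needs a way to hand parcels off for delivery and a stream of parcels arriving from other nodes; peer management, routing, and the wire format stay behind the interface and can be replaced without the rest of factomd noticing.

```go
package p2p

// Parcel is a placeholder for the application payloads factomd exchanges.
type Parcel struct {
	Address string // target peer, or a broadcast marker
	Type    uint16
	Payload []byte
}

// Network is a hypothetical view of the plug-and-play surface: factomd only
// sees parcels going in and out, while everything else remains an
// implementation detail behind this boundary.
type Network interface {
	// ToNetwork accepts parcels from the application for delivery.
	ToNetwork() chan<- Parcel
	// FromNetwork delivers parcels received from remote peers.
	FromNetwork() <-chan Parcel
	// Start begins dialing seed peers and accepting connections.
	Start() error
	Stop()
}
```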

The goal is to have code that is easily understandable, with clearly separated program logic, and a modular approach that will make future expansion easier. I also hope to maintain backward compatibility with the existing network.

Progress

I opted to start a new repository for the new implementation to make testing it easier. This also gives us the option of releasing the code as a completely standalone gossip network library that can be integrated into other, non-Factom projects as part of the general open-source philosophy. At the time this blog is published, the network will boot up and peers will connect to each other in a fashion similar to the live factomd network, but it’s still undergoing dramatic changes.

One of the biggest challenges is testing a network on a very limited developer setup. I didn’t want to abstract the network code away with golang’s pipes but actually use the TCP/IP stack. The solution I came up with is to use the reserved loopback addresses in 127.0.0.0/8. Nodes are able to configure which interface to bind to, and it’s easy to launch dozens to hundreds of instances of the network using loopback addresses.
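
A minimal sketch of that trick looks like this (the addresses and port are made up for the example): each instance binds its own address inside 127.0.0.0/8, so many "nodes" can talk over real TCP connections on one development machine.

```go
package main

import (
	"fmt"
	"net"
)

// Bind a handful of listeners to distinct loopback addresses to simulate
// separate nodes on one machine. Address range and port are illustrative.
func main() {
	var listeners []net.Listener
	for i := 1; i <= 8; i++ {
		addr := fmt.Sprintf("127.0.0.%d:8108", i)
		l, err := net.Listen("tcp", addr)
		if err != nil {
			fmt.Println("failed to bind", addr, "-", err)
			continue
		}
		fmt.Println("node listening on", l.Addr())
		listeners = append(listeners, l)
	}
	for _, l := range listeners {
		l.Close()
	}
}
```

On Linux the entire 127.0.0.0/8 range is routed to the loopback interface out of the box; other operating systems may need aliases added to the loopback device first.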

I’m also publishing my test code for anyone who is interested, but it will likely remain poorly documented. Currently, it starts a seed server and 51 network instances that connect to each other after randomized delays. They share peers with each other and ping to keep the connections alive.
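
The harness has roughly this shape (the helper names are placeholders for code I haven’t finalized, and the numbers are just the current test values):

```go
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// Placeholders so this sketch compiles; the real helpers dial the seed,
// exchange peer lists, and send periodic pings to keep connections alive.
func startSeedServer(bind string) { fmt.Println("seed listening on", bind) }

func startNode(bind, seed string) { fmt.Println(bind, "joining via", seed) }

// One seed server plus 51 instances that come online after randomized delays.
func main() {
	const nodes = 51
	seed := "127.0.0.1:8108"

	go startSeedServer(seed)

	for i := 2; i <= nodes+1; i++ {
		addr := fmt.Sprintf("127.0.0.%d:8108", i)
		delay := time.Duration(rand.Intn(5000)) * time.Millisecond
		go func(addr string, delay time.Duration) {
			time.Sleep(delay) // stagger startup like real nodes joining
			startNode(addr, seed)
		}(addr, delay)
	}

	time.Sleep(6 * time.Second) // let every instance start before exiting
}
```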

Bonus

I’ll have more updates regarding the new structure in the coming weeks as that is finalized. For now, I am able to provide a closer answer to a question that I couldn’t answer in my previous blog: What does the factom network actually look like?

This is a rough approximation of the real Factom network layout, implemented using my own network code. It’s not a perfect replica but rather an answer to the question “what would the network look like under ideal conditions?” Data gathered from MainNet confirms that most nodes connect to the nodes in the seed file and stay connected, creating a sort of hub in the center. Seed nodes and their connections are colored red, green, and blue. The arrows indicate the direction in which each connection was dialed.

At the moment, the seed nodes are critical infrastructure and are under much heavier load than normal nodes, since close to all nodes connect to them. This is one of the challenging aspects of ad-hoc gossip networks, and I will explore several strategies to re-organize this layout. More on that in future blogs.

This approximation uses 51 nodes with 3 seed nodes, and each node tries to maintain 6 connections in total. MainNet’s configuration is around 125 nodes with 10 seed nodes, with each node trying to maintain 32 connections. I reduced the number of connections per node to keep the graph readable.

Originally published at https://factomize.com on April 14, 2019.
