To design a networking protocol that scales, we have to understand the core theoretical primitives from which a network of networks assembles itself.
Every graph is built from nodes and edges. The concept of an edge without nodes is absurd, and so it's easy enough to recognize that a node is the core primitive.
Thus, we can define the most basic type of network:
At this point, we can define a couple terms which will be used to describe network configurations:
- A fully connected network is a communication network in which each node is connected to every other node. In graph theory it is known as a complete graph. A graph consisting of a single node is considered complete.
- A relay network is a graph consisting of some number of nodes (at least 3) in which at least two of the component nodes are not directly connected. By these definitions, complete graphs cannot be considered relay networks, and vice versa.
We can begin to reason about the concept of distance and say that a node's distance from itself can safely be assumed to be zero. By this measure we can rephrase the above definitions:
A fully connected network is one in which the distance between any two nodes is not greater than one.
A relay network is one in which the distance between at least one pair of nodes is greater than one.
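These two definitions are easy to check mechanically. Here is a minimal sketch in Python; the adjacency-set representation and node names are illustrative, not part of any proposed protocol:

```python
from collections import deque

def distances_from(adj, start):
    """BFS distances from `start` to every reachable node."""
    dist = {start: 0}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        for peer in adj[node]:
            if peer not in dist:
                dist[peer] = dist[node] + 1
                queue.append(peer)
    return dist

def classify(adj):
    """'fully connected' if no pair is more than one hop apart,
    'relay' if at least one pair is further than one hop."""
    for node in adj:
        if any(d > 1 for d in distances_from(adj, node).values()):
            return "relay"
    return "fully connected"

triangle = {"a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b"}}
chain    = {"a": {"b"}, "b": {"a", "c"}, "c": {"b"}}
```

Here `classify(triangle)` yields `"fully connected"` and `classify(chain)` yields `"relay"`, matching the definitions above.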
An island has no need for addressing, routing, or any other concerns. As such, it should be fairly obvious why we must go on to define the next type of network:
When two nodes form some connection, more complicated issues start to come into play.
Nodes must each have a functionally unique address to be able to decide which packets are meant to be internal, and which are meant to be external.
It must be decided whether a node should be able to maintain multiple addresses, or if they should be limited to just one.
The connection between two nodes, called an edge, should be represented distinctly from the physical means used to transmit packets. This allows a node to switch seamlessly between different transmission media:
- hard links
As well as tolerate delays in transmission that would cause errors in lower-level communication protocols.
A pair is still considered fully connected, and as such, not a valid relay network.
It is interesting to note at this point that neither an island nor a pair can be seen as orientable, a concept that will soon become very important.
As soon as a third node joins in, things get interesting. The third node has the option of connecting to either of the previous nodes, or to both of them. I'll deal with the latter case soon. In the former case, the network takes on a chain hierarchy. It is the simplest possible instance of a relay network.
At this point, the network finally becomes orientable. Even if we assume that the first two nodes in the network are otherwise identical (aside from their unique addresses), the fact that the new node has formed a connection with one of them rather than the other creates a hierarchy in the network.
We can formalize the definition of this hierarchy by examining the distance matrix. The sum of any node's row in the table is an effective measure of its centrality. A lower value indicates a more privileged position. Nodes which rely entirely on one such node to relay messages for them are known as leaf nodes.
The central node in this network bears the responsibility of relaying information between the other two nodes. Even if we assume that some perfect encryption scheme is employed, the central node is still able to decide (arbitrarily) which packets it will forward, and which it will deny.
In recognition of the responsibility of its position, such a node is able to refuse to connect to leaf nodes unless their addresses are sufficiently similar to its own. By leveraging its position, a burgeoning hub incentivizes a stigmergic process of address aggregation. Nodes in a similar keyspace are not necessarily in the same logical space, but this behaviour increases the likelihood that they will be.
Furthermore, it places the burden of address aggregation on nodes which are otherwise not contributing to the routing table. In such a situation, a joining node would first connect, then be informed of the strictness of the hub's criteria for routing privileges. It would then repeatedly regenerate addresses until one sufficiently close in keyspace were generated. Upon assuming this address as its identity, it would be able to start sending packets to the rest of the network.
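The regeneration loop might look like the following sketch. The fixed-width addresses, the leading-bits similarity measure, and the threshold are all assumptions chosen for illustration:

```python
import os

def prefix_match(a, b):
    """Number of leading bits two equal-length addresses share."""
    x = int.from_bytes(a, "big") ^ int.from_bytes(b, "big")
    return len(a) * 8 - x.bit_length()

def regenerate_until_close(hub_addr, min_shared_bits):
    """Regenerate random addresses until one shares at least
    `min_shared_bits` leading bits with the hub's address."""
    while True:
        candidate = os.urandom(len(hub_addr))
        if prefix_match(hub_addr, candidate) >= min_shared_bits:
            return candidate

hub = os.urandom(8)
addr = regenerate_until_close(hub, 4)  # ~2**4 attempts on average
```

The expected number of attempts grows exponentially with the threshold, which is exactly the "burden" the hub shifts onto the joining node.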
larsg points out that this would cause problems when _roaming_. This could potentially be solved by having the original **patron** node vouch for the roaming node. Thus, the patron node could store information about the roaming node, and speak on its behalf. When another node means to connect, it would be able to find the roaming node's home neighbourhood, and from there would receive information about where it had gone. It is assumed that this would be made easier by the roaming node _phoning home_ to inform its patron of its location periodically. Whether this would be permitted would, of course, be up to the node offering its services to the roaming node.
Before going any further, it's worthwhile to point out that the network shown above can also be viewed as a tree.
The point to take away from this is that any network can be viewed as a tree. As such, a tree is less a concrete topology than an abstract one, useful for representing the network as viewed from one particular node's perspective.
Now we can return to the topology mentioned earlier. When the third node joins the network, it can potentially form edges with both of the other nodes. Such a configuration is called a ring topology.
A ring is notable for the fact that the sum of any node's distances from the rest of the ring is the same for every node. From any one node's perspective, the distances to the rest of the ring match those seen from the center of an identically sized chain topology.
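The first property is easy to verify: in a ring, every row of the distance matrix sums to the same value. A quick check in Python (node names are arbitrary):

```python
from collections import deque

def distance_row(adj, start):
    """BFS distances from `start` to every node in the graph."""
    dist = {start: 0}
    q = deque([start])
    while q:
        n = q.popleft()
        for p in adj[n]:
            if p not in dist:
                dist[p] = dist[n] + 1
                q.append(p)
    return dist

# a five-node ring: a-b-c-d-e-a
ring = {"a": {"b", "e"}, "b": {"a", "c"}, "c": {"b", "d"},
        "d": {"c", "e"}, "e": {"d", "a"}}

row_sums = {n: sum(distance_row(ring, n).values()) for n in ring}
# every node sees distances 1, 1, 2, 2 — a row sum of 6
```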
In graph theory, a ring is known as a cycle. Cycles are one of the core primitives that allow nodes to build effective paths to the rest of the network.
Cycles also serve to keep the network honest. Rather than relying on nodes to accurately report their known routes, adjacent nodes affect their peers' reputations when they discover them to have misrepresented themselves.
This would open up the network to attack if nodes could simply tell adjacent nodes that they had a route to the offending node (C) without having them confirm it. As such, there must be a routine for confirmation:

1. A is peered with C and B. C has told A it is a leaf node.
2. A asks B for a list of peers. B responds to A with an array containing C.
3. A informs B that C has declared itself a leaf node.
4. B momentarily stops routing over the connection 'bc'.
5. B sends an encrypted message to C through A. If C responds to B through A, it is confirmed that A is an effective path.
6. How to confirm that C has advertised itself as a leaf node? It isn't necessary: A can simply opt not to route for C.
7. B resumes routing for C, unless it trusts A's claim that C is malicious.
This process relies on trust. Nodes must each be configurable as being fundamentally trusting or untrusting. This defines their default behaviour. In either case, a whitelist/blacklist can be employed to override the default behaviour.
Obviously, in a potentially malicious network, it is safer to be untrusting. It is assumed, however, that a number of nodes can be controlled by a single owner, and that those nodes can be relied on not to make false reports.
Furthermore, a node can also be configured to allow such behaviour from its peers. For instance, if the example node C belonged to the same owner as A, it might be desirable for A to assume the task of routing. Perhaps C is a mobile device with limited throughput, while A might have unlimited bandwidth. Whatever the situation, it must be understood that one size does not fit all.
Any node with more than two peers is considered a hub.
Hubs drastically reduce the average number of hops between any two points in the network. If it is assumed that nodes are to possess routing tables of equal size, then a hub will necessarily focus on breadth, rather than depth. This is important to consider when choosing a search algorithm.
They are a requirement of a small world network, and generally act as a confluence of a number of other complex topologies.
It may be desirable for a hub to act differently depending on the topology of its neighbours.
There are two kinds of mesh topologies which each elicit very different behaviour. A mesh can be either fully connected, in which case no two nodes are separated by more than a single hop, or partially connected. Partially connected mesh networks do not have any unique properties in comparison to the other network topologies; rather, they are best conceived as being composed of other topologies.
This graph's distance matrix
Some other statistics
The first row, PEERS, is a count of how many peers that node has.
The second, ROLE, divides every node into one of three categories: leaf, relay, or hub.
The third, CENTRALITY, is the sum of the node's row in the distance matrix, divided by the number of nodes in the table (AKA, its mean distance). This measure includes its distance from itself.
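All three statistics can be computed directly from the adjacency sets. A sketch, using the role definitions given above and a three-node chain as the example graph:

```python
from collections import deque

def distance_row(adj, start):
    """BFS distances from `start` to every node in the graph."""
    dist = {start: 0}
    q = deque([start])
    while q:
        n = q.popleft()
        for p in adj[n]:
            if p not in dist:
                dist[p] = dist[n] + 1
                q.append(p)
    return dist

def node_stats(adj):
    """PEERS, ROLE, and CENTRALITY (mean distance, self included)."""
    n = len(adj)
    table = {}
    for node in adj:
        peers = len(adj[node])
        role = "leaf" if peers == 1 else ("hub" if peers > 2 else "relay")
        centrality = sum(distance_row(adj, node).values()) / n
        table[node] = {"peers": peers, "role": role, "centrality": centrality}
    return table

# a three-node chain a-b-c
chain = {"a": {"b"}, "b": {"a", "c"}, "c": {"b"}}
stats = node_stats(chain)
```

For the chain, `a` and `c` come out as leaves with centrality 1.0, while the relay `b` has the privileged mean distance of 2/3.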
Most of this article has been theoretical so far. While topology has a great effect on the behaviour of the network, we can't get a clear idea of how things will work unless we understand the process of its creation.
The proposal to leverage a node's position in the negotiation of keyspace has different results depending on the order in which the edges of the graph were formed. As such, the division of neighbourhoods can hint at the network's history. This leaves plenty of room for coincidence, though, and as such is not a reliable metric for post-mortem analysis.
Given the resulting network, we can imagine a number of possible histories.
A ring and a hub
Let's suppose that the node b first joined with a, forming a pair. When c peers with b, b realizes it is about to become a relay. As such, it leverages this responsibility, and demands that c find an address sufficiently similar to its own.
As d, e, and h ask to peer as well, their respective addresses differ in relation to each other, but since they each have patterns in common with b, they are more likely than average to have patterns in common with each other as well.
Meanwhile, a chain has been forming. The process begins in a similar fashion, with j acting as a relay between i and k. Each successive member of this neighbourhood adds an additional peer in sequence, forming an increasingly long chain. With each additional member, the keyspace takes on the appearance of a spectrum. Each member is slightly less likely to have features in common with the original, but any relay should have a fairly similar key to its direct neighbours. This process continues with l, m, and n lengthening the chain until finally o asks i to peer.
This link is different from the rest in the network, since i stands to benefit significantly from the arrangement. The distance between i and n drops from five (i>j>k>l>m>n) to two (i>o>n). In fact, every member of the ring benefits:
- Nodes towards the edges, namely i and o, are significantly less marginalized by their position.
- The nodes towards the center, k, l, and m end up having to route much less traffic as well.
- Furthermore, the network as a whole becomes more resilient, with each node having a backup path in case any one member of the ring experiences failure.
It is reasonable for i to accept this offer without negotiating with o to assume a similar key. The ring ends up with a distinct disruption in its gradual fade from one keyspace to another, barring some coincidence in which i and o just happened to have generated similar keys independently.
Depending on whether g peered with f before or after f peered with e, it may end up generating its own key independently, or assimilating the key of its more privileged neighbour.
Finally, by the time the link i>o is formed, none of these nodes are considered leaves. As such, they act as border nodes, which serve as indicators of where a neighbourhood begins and ends.
The result of this process is that any one node can afford to have a reasonably small routing table in relation to the size of the network. When g wants to reach m, each member of the chain g>f>e knows the message can only go one way. The hub b should be able to tell that i is a likely bet (barring some coincidences which will resolve themselves anyway). The ring's entry point, i, is then able to send its packets along one of two paths, though it will most likely prefer to send them along the chain j>k>l>m instead of i>o>n>m, given the keyspace resemblance.
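One plausible way for a node like i to express this preference is to forward to the peer whose key shares the longest prefix with the destination's key. A sketch with hypothetical 16-bit keys; the key values, key width, and tie-breaking are all assumptions:

```python
def shared_bits(a, b, width=16):
    """Leading bits two keys have in common."""
    return width - (a ^ b).bit_length()

def next_hop(peer_keys, destination_key):
    """Pick the peer whose key most resembles the destination's
    (a sketch; ties are broken arbitrarily by max())."""
    return max(peer_keys, key=lambda p: shared_bits(p, destination_key))

# hypothetical keys for i's peers j and o, and for destination m
j = 0b1010_0000_0000_0000
o = 0b0001_0000_0000_0000
m = 0b1010_1100_0000_0000
```

Here `next_hop([j, o], m)` selects `j`, since j's key shares four leading bits with m's while o's shares none, mirroring i's preference for the chain.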
There are other options for deciding how keys are negotiated, but ultimately these cannot be enforced by the network, only by the nodes themselves. Perhaps it would be better for pairs (as a precursor to a chain) to negotiate similar keys mutually. They could begin with an introductory process to bootstrap their connection, then generate an increasingly large batch of keys until a similarity was found between any two candidates. The total number of comparisons grows quadratically with the number of keys generated; however, since earlier comparisons need not be repeated, each new batch adds only a linear amount of work. The benefit to the network's routing could be significantly improved using this method.
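A sketch of such a mutual negotiation, assuming random fixed-length keys and a shared-leading-bits similarity test; the batch size, key length, and threshold are arbitrary. Only pairs involving a newly generated key are compared, so earlier work is never repeated:

```python
import os

def shared_bits(a, b):
    """Leading bits two equal-length keys have in common."""
    x = int.from_bytes(a, "big") ^ int.from_bytes(b, "big")
    return len(a) * 8 - x.bit_length()

def negotiate(threshold, batch=8, key_len=4):
    """Both sides generate keys in batches until some pair of
    keys shares at least `threshold` leading bits."""
    mine, theirs = [], []
    while True:
        new_mine = [os.urandom(key_len) for _ in range(batch)]
        new_theirs = [os.urandom(key_len) for _ in range(batch)]
        # only compare pairs that involve at least one new key;
        # all-old pairs were already checked in earlier rounds
        for a in mine + new_mine:
            for b in new_theirs:
                if shared_bits(a, b) >= threshold:
                    return a, b
        for a in new_mine:
            for b in theirs:
                if shared_bits(a, b) >= threshold:
                    return a, b
        mine += new_mine
        theirs += new_theirs

a, b = negotiate(threshold=6)
```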
The assumption all along has been that static IP addresses are better than the alternative, dynamic IPs. We may need to challenge this assumption. A static IP is your pseudonym. When you visit some site, it is your identity. If you are hosting content, it is the address associated with your services. If you're sick of NAT, then you probably get a knee-jerk reaction to the prospect of using an IP that periodically changes, but let's think about this:
Certainly there are nodes within the network which act solely as clients. They have closed all of their ports, and simply use their node to browse other people's content. If they need to use their IP as a persistent identity to interact with services which use that identity for authorization, then they are not suitable candidates for a dynamic IP.
Likewise, servers require some degree of consistency, unless they choose to abstract away their IP address in favour of a DNS solution, which, as we all know, has its fair share of issues.
Many of us have probably heard the term client-server model so many times that it's easy to forget there are other roles in the network. We are ignoring the routers: nodes which perhaps act as relays in a physical mesh, propagating other nodes' messages while not generating any of their own. These nodes exist primarily as a common resource. While they require a physical address in order to connect to other nodes and act as a relay, their own identity is not important.
If we consider the narrative I presented above, and how the apparent neighbourhoods of the network can end up with very different borders depending on the order of their connection, it should be apparent that there may be situations in which it would be beneficial to the network as a whole for nodes to renegotiate the terms of their connections. So long as all affected parties can agree that the effects are not to their detriment, there should be an established protocol for doing so.
For example, a relay node accrues a number of peers, each of which has its own peers. No negotiation takes place. Over time, however, some of those peers go away, until only two remain: another relay, which acts as a path to the rest of the network, and a leaf node. If the node in question has no vested identity, nor acts as a destination node, it is in everybody's interest for this node to recognize that fact, and offer to its relay node to renegotiate its IP to better reflect the network's topology.
The only overhead for this process would be that the new parent node would likely have to send a message to its peers informing them that one of its peers has undergone a nymshift. This could be a recursive process, with nodes depending on the shifting node subsequently following suit, if it suits their use-case. The decision to do so would be made internally, and would be dependent on the operator of that node having set a flag IS_STRICTLY_A_ROUTER to true.
This problem can be formally analyzed using game-theoretic models, assuming that the routing node has no vested interest in the outcome, and that it is behaving altruistically in the context of several other non-cooperative agents.
Any particular path cannot be considered solely on the basis of the number of hops. Any edge is also constrained by the fact that it represents a physical link between two locations.
It takes a non-trivial amount of time for any information to be transmitted between these two points. Furthermore, once that information is received, there are various computational processes that introduce additional delays. We can determine the difference between the total time and the respective times of the component processes rather easily, but in most foreseeable cases, we are only concerned with the total latency of any given path.
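In other words, a path's latency decomposes into per-link transmission times plus per-node processing delays, even if in practice we only measure the total. A trivial sketch with made-up figures:

```python
def path_latency(link_times, processing_delays):
    """Total one-way latency of a path: the transmission time of
    each edge plus the processing delay at each relay node."""
    return sum(link_times) + sum(processing_delays)

# hypothetical figures in milliseconds: three hops, two relays
total = path_latency([5.0, 12.0, 3.0], [0.5, 0.5])
```

Given the total and the link times, the aggregate processing delay falls out as their difference, which is the decomposition described above.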
If all nodes were components of a homogeneous network, that is, one consisting entirely of a single medium, be it a wire, a radio signal, or an optical signal, we might be in a better position to make assumptions about what kind of heuristics we could safely apply. Given that our test network is not homogeneous, our job is significantly more difficult.
We could use a greedy search algorithm to traverse the graph based on latency alone. It might build paths piece by piece, taking the shortest (latency-wise) steps and