Odlyzko-Tilly-Raymond scaling

I’ve been ill with influenza and bronchitis for the last week. Maybe this needs to happen more often, because I had a small but fundamental insight into network scaling theory a few minutes ago.

I’m posting it here because I think my blog regulars cast a wide enough net to tell me if I’ve merely rediscovered a thing in the existing literature or, in fact, nobody quite got here before.

Back in the naive days of the dot-com boom people used to talk about Metcalfe’s Law: the value of a network of n nodes scales as O(n**2). This heuristic drove a lot of the early excitement about Internet-related stocks.

But then the boom busted, and in 2006 along come Odlyzko and Tilly to explain that. What they said is: empirically, networks actually seem to top out at O(n log n) value, and this is superlinear but way lower than O(n**2), and thus dot-com boom fall down go bust.

You can read the Odlyzko/Tilly paper here. It’s good; it’s lucidly written and deserves its status as a seminal classic. The explanation of O(n log n) that the authors give is that in a world where not all connections have equal value, people build only the connections with the best cost-benefit ratio, and due to an effect called the “gravity law” the value of traffic between any two nodes falls off superlinearly with distance. This produces a substantial disincentive to build long-distance links, leading to a network of clusters of clusters with O(n log n) link density and value scaling.
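For reference, the usual “gravity law” formulation (this is the general gravity model from traffic and trade studies, in my notation, not a formula copied from the paper) is

$$ T_{ij} \;\propto\; \frac{M_i \, M_j}{d_{ij}^{\,\beta}} $$

where $T_{ij}$ is the traffic (or value) between nodes $i$ and $j$, $M_i$ and $M_j$ are their sizes, $d_{ij}$ is the distance between them, and $\beta$ is commonly taken to be around 2, by analogy with gravity. That distance penalty is what makes long links a poor buy.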

After Odlyzko/Tilly, complexity theorists looked at real-world networks and found that they frequently evolve towards a topology that is self-scaling or fractal – clusters of clusters at any scale you examine. Circulatory systems in the body, neural networks in the brain, road and rail networks in human cities, the Internet itself – over and over, we find self-scaling nets anywhere evolution is trying to solve optimal-routing problems.

So here is my small stone to add to the Odlyzko/Tilly edifice: their assumption in 2006 was stronger than it needed to be. You still get selective pressure towards an O(n log n) self-scaling network when the value of all potential connections is equal, not variable, so long as the cost of connections still varies. The only assumptions you need are much simpler ones: that the owner of each node has a finite budget for connection-building, small in relation to the cost of providing links to all nodes, and that network hops have a nonzero cost.

To see why, we need to recognize a new concept of the “access cost” of a node. The value of a node is by hypothesis constant: the access cost is the sum over all other nodes of any cost metric over a path to each node – distance, hop count, whatever.
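In symbols (just restating the definition above; c is whatever per-link cost metric you pick, and each term is taken over the best available path):

$$ \mathrm{AccessCost}(v) \;=\; \sum_{u \neq v} \; \min_{p \,:\, v \to u} \; \sum_{e \in p} c(e) $$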

In this scenario, each node owner still wants to find the best links into the network, but now the valuation minimizes access cost rather than maximizing link value. Under this assumption, everyone is still trying to solve an optimal-routing problem, so you still get self-scaling topology and O(n log n) statistics.
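Here is a minimal toy sketch of that selective pressure under the weaker assumptions (everything in it – node count, budget, the unreachable-node penalty – is arbitrary and illustrative, not a claim about real networks): every node is worth the same, a link costs its Euclidean length, and each owner greedily buys whichever affordable link cuts its own access cost the most.

```python
# Toy model: equal node values, link cost = Euclidean length, small per-owner
# budget.  Each owner greedily buys the affordable link that most reduces its
# access cost: the sum of cheapest-path costs to every other node, with a
# fixed penalty standing in for nodes it cannot reach yet.
import math
import random
import networkx as nx

random.seed(1)
N = 60                       # number of nodes (arbitrary)
BUDGET = 1.0                 # per-owner spend, small vs. cost of linking to all
UNREACHABLE = 10.0           # penalty for a node with no path yet (arbitrary)

pts = {i: (random.random(), random.random()) for i in range(N)}
link_cost = lambda a, b: math.dist(pts[a], pts[b])

G = nx.Graph()
G.add_nodes_from(pts)

def access_cost(g, v):
    lengths = nx.single_source_dijkstra_path_length(g, v, weight="weight")
    return sum(lengths.get(u, UNREACHABLE) for u in g.nodes if u != v)

for v in list(G.nodes):
    spent = 0.0
    while True:
        base = access_cost(G, v)
        best, best_gain = None, 0.0
        for u in G.nodes:
            if u == v or G.has_edge(u, v) or spent + link_cost(u, v) > BUDGET:
                continue
            G.add_edge(u, v, weight=link_cost(u, v))   # try the link...
            gain = base - access_cost(G, v)
            G.remove_edge(u, v)                        # ...then undo it
            if gain > best_gain:
                best, best_gain = u, gain
        if best is None:
            break
        G.add_edge(best, v, weight=link_cost(best, v))
        spent += link_cost(best, v)

print(G.number_of_edges(), "links built for", N, "nodes")
# Far fewer than the n*(n-1)/2 links a Metcalfe-style full mesh would imply.
```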

That’s it. Same result as Odlyzko/Tilly but with weaker assumptions. Put their case and my case together and you have this:

The value of a network of n nodes will rise as O(n log n) under the following assumptions: (a) node hops have variable costs, and (b) each node owner has a small budget.

Is this in the literature anywhere? If not, I guess it’s worth my time to do a more formal writeup.

30 comments

  1. It seems to me you’re adding an unnecessary assumption to Odlyzko-Tilly, not removing one. Their paper doesn’t require as a hypothesis that connection values are unequal, it simply permits that they may be. Either way, as a consequence of their “gravity” law, the unequal costs of those connections generally lead to unequal profit in forming them, and from there you get your tendency toward O(n log n) network complexity.

    1. >Their paper doesn’t require as a hypothesis that connection values are unequal, it simply permits that they may be.

      Huh? I’ll reread, but I thought the meat of their argument was that node owners would build out to the nodes with the highest value.

    2. This looks pretty definitive to me:

      The fundamental fallacy underlying Metcalfe’s and Reed’s laws is in the assumption that all connections or all groups are equally valuable. […] In general, connections are not used with the same intensity (and most are not used at all in large networks, such as the Internet), so assigning equal value to them is not justified. This is the basic objection to Metcalfe’s Law

      You are correct that they also talk about the “gravity law” in a way that might be taken to imply the O(n log n) conclusion, but that’s not the form their actual argument takes.

  2. Somewhat off-topic thought: nationalism is often superior to globalism because it is more fractal.

  3. @esr: Back in the naive days of the dot-com boom people used to talk about Metcalfe’s Law: the value of a network of n nodes scales as O(n**2). This heuristic drove a lot of the early excitement about Internet-related stocks.

    But then the boom busted, and in 2006 along come Odlyzko and Tilly to explain that. What they said is: empirically, networks actually seem to top out at O(n log n) value, and this is superlinear but way lower than O(n**2), and thus dot-com boom fall down go bust.

    There’s a simpler explanation for the dot-com bust, though the Odlyzko and Tilly paper looks valid.

    The dot-com boom was predicated on the assumption that the Internet was a whole new paradigm, and stock values would continue to ascend into the stratosphere and beyond without niggling little things like actual revenues and profits to support them. It was magical thinking at its finest.

    Netscape’s Marc Andreessen complained back in the day after getting grilled by stock analysts on an investor conference call about how he planned to generate revenues and make actual money, since Netscape had IPOed but had yet to do either, with no indication it ever would. He said “They don’t understand our business model!” Gee, Marc. You’re the CEO. It’s your job to explain it to them. Either you’ve failed to explain it, or you don’t in fact have a business model…

    And legendary investor Warren Buffett, head of Berkshire Hathaway, commented back then that if he ever gave a graduate finance course, the final exam would be one question – provide a value for an Internet company – and any student who actually provided one would flunk.

    (Andreessen is a principal at a venture capital firm these days. I wonder if he still believes what he did when he was running Netscape. I hope not…)

    Value gets measured in money, and the dot-com boom went bust as investors rethought their assumptions and concluded they weren’t likely to make any under the prevailing assumptions. The underlying support for stock values simply didn’t exist, and lots of hot air got let out of balance sheets in a short time frame.

    The laws you cite may define potential value, but actual value will have other factors affecting it.
    ______
    Dennis

  4. I remember reading the Odlyzko/Tilly paper around the time it was first published. The problem with its overly “just-so” story about the dot com bust is an echo of its larger problem, which is the notion of value. Networks, their owners, and the costs of care and feeding associated with them, suffer from obvious distributed/concentrated costs/benefits. This is a much easier way to explain why some network “owners” resist combination with others, rather than some notional gap between O(n**2) and O(n ln n) of “value” being “produced.” This problem is further compounded by distortions introduced by regulatory regimes and institutional capital markets. In the latter, for example, can be found all the “explanation” needed for the dot com mushroom cloud – institutional investors take a complex market and introduce valuation models (e.g. NPV per marginal subscriber added) upon which they rely to make large bets in equity markets. It turns out, however, that the extreme dimensionality reduction which necessarily attends these blunt heuristics causes wild over- and under-estimation of a business’s prospects, and produces exactly the sorts of crazy distortions and market manias we saw.

    There are other secondary effects which disconnect the “value” which is perceived throughout a network. For example, the economic calculus of the notional owners of various nodes in the network (where, presumably, the build-out decisions are being analyzed and made) has very little to do with the behavioral incentives and trade-offs which inform downstream “leaves.” For example, I suspect the decision matrix of Verizon when evaluating capex on their FiOS platform is different than the marginal utility derived, or the value extracted, by my kids on their devices playing Minecraft… This is especially true when mature markets evolve from an inflationary growth and uptake adoption phase to something closer to “zero-sum” market share patterns, like, for example, domestic internet connectivity.

    As with many human artifacts at larger scale, value aggregation/creation models which have a blind spot for distributed costs and concentrated benefits (or the inverse) will make bad guesses about actors’ actions.

    1. >As with many human artifacts at larger scale, value aggregation/creation models which have a blind spot for distributed costs and concentrated benefits (or the inverse) will make bad guesses about actors’ actions.

      All your criticisms are apposite. There is, nevertheless, a powerful argument that the authors were on to something, and that is the ubiquity of self-scaling O(n log n) networks in evolved solutions to routing problems.

  5. So this explains power-law distributions more fundamentally? On what I think might be a related subject of a theory of money, it is time to let you know I am writing about your Inverse Commons in relationship to Nash’s Ideal Money. Ideal Money being the basis of the open source Bitcoin experiment. I’d suggest also reading at least this far back. Apologies in advance if I’ve wasted your time. I’d like to apply a more detailed thought process to this when I have more time.

  6. At risk of overcomplicating a model with preponderant first-order effects, there may be something else going on: the greater the number of connections, the lower the absolute value of any one connection. Any one account in your Twitter feed adds less value if you follow 1000 people as opposed to only 20.

    This may be mathematically no different from modeling limited networking resources, though.

  7. It would seem to me that the long history of the evolution of neural and circulatory systems in all living things would be the clearest exemplar of a fundamental “law” of optimized network functionality. Conservation of resources while maximizing function and resilience has been wrought via the existential gauntlet of billions of years of trial and error. Match an equation to this natural phenomenon and you have to be in the ballpark at least.

  8. I do think that Odlyzko/Tilly were on to something, insofar as Metcalfe’s quadratic notions of value clearly overshoot at some point. It turns out that O/T’s model also peels off to the upside at some node count. It can be seen that these models are quite accurate when applied to the appropriate point in the lifecycle of the network (or network neighborhood) in question. When 3COM (and ethernet generally) was hoovering up market share from the previously-fragmented menagerie of ARCnet, token ring, etc., Metcalfe and the market were on a part of the curve which very much behaved O(n^2). When the market over-built in the dot com run-up, the O/T observation of O(n ln n) was more accurate.

    One of the sectors we work in is mobile telephony, and we model subscriber uptake with a Gompertz function, a sigmoid curve with regions that behave locally like O(n^2) and O(n ln n). When we were working on voice and internet connectivity over mobile telephony networks in rural India in the last decade, we saw that this sigmoid behavior very closely matched our observed adoption curve.
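    For anyone who wants to play with the shape, here is a tiny sketch of the Gompertz form (parameter names and values are mine, purely illustrative, not our production model):

    ```python
    import math

    def gompertz(t, ceiling=1.0, displacement=5.0, growth=1.2):
        """Gompertz sigmoid: slow start, steep middle, flat saturation."""
        return ceiling * math.exp(-displacement * math.exp(-growth * t))

    # Early on, growth accelerates (the Metcalfe-feeling part of the curve);
    # later it decelerates toward the ceiling, where even n log n overestimates
    # the marginal value of one more node.
    for t in range(9):
        print(t, round(gompertz(t), 3))
    ```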

    This is where the fractal property of network (and network value) growth comes into play, and is quite likely to match an intuitive personal experience of perceived value. For example, the local sigmoid curve (on a more “leftward” lifecycle position) of those people in India for whom our network connection was a century-skipping value creator was clearly exponential, but probably didn’t move the needle much in, say, Malvern PA, whose denizens are much more “rightward” on their local sigmoid, perhaps where O(n ln n) might overestimate the marginal value creation.

    Many readers here tend to be, perhaps, early adopters in the network connectivity realm, and might remember when higher capacity broadband-class connections first arrived in their neighborhood – glorious bandwidth, like drinking from a firehose! Flash forward a year or two to when more have joined the party, and one’s connection is now choppy and inconsistent as the neighbors pound the network with downloads of the latest Game of Thrones or Debbie Does Andromeda VII, and the visceral reality of the sigmoid can be felt.

    Quantum or large step improvements in technology generations have the effect of “pulling” large portions of the network back to the left on the curve, away from the value stagnation of the horizontal asymptote. This same resetting effect happens to the inter-nodal value bleed-off of the “gravity effect”: the pattern of value decrease may be fractally self-similar, but the absolute values in question will be radically different in, say, dollar terms (which is why businesses sink the massive capital into such efforts at all). A network provider buries capital into nodes to push early adopters into a point on the curve where Metcalfe’s model underestimates marginal value creation, and milks the local curve through the O/T superlinear area, and tries to make future capital improvement/strategic technology adoption decisions to optimize and “surf” the curve in the most profitable regions.

    When the last toaster in Trashcanistan is connected to the internet, the toaster (or its owner) may perceive a huge bump in value locally, but it may be that the “global” network value function is well within the region dominated by that asymptote. I perceive that Eric’s refinements are implicitly pointing to the local/global fractal property, and perhaps could be enhanced by noticing that value creation is described by a sigmoid curve dependent on node count assuming that (a) node hops have variable costs which exist in a local value-creation context, and (b) each node owner has a small budget formulated in and responsive to that same local context.

    Eric, thanks for a really thought-provoking post – cheers!

    1. >Eric, thanks for a really thought-provoking post – cheers!

      And thank you for that very interesting last response – you’ve given me a lot to think about there.

  9. @BCD —

    That looks to me more like a specific example of the universal limiting case — that opportunity costs never go away.

  10. > When 3COM (and ethernet generally) was hoovering up market share from the previously-fragmented menagerie of ARCnet, token ring, etc., Metcalfe and the market were on a part of the curve which very much behaved O(n^2).

    Early Bitcoin market cap value correlated with adoption^2 measured as unique addresses (i.e. nodes between transactions).

    Eric, also Nick Szabo pointed out that the potential value of a network is the inverse of the cost of transport to the fourth power.

  11. > the potential value of a network is the ~~inverse~~ reciprocal of the cost of transport to the fourth power.

    Also hope you feel better asap. Sucks being sick. I was sick (as in delirium) for 6 years. You’re getting up there in age a bit. I noticed in your JUG video several years ago you were coughing. Hope you are taking care of yourself. I saw you mention in a prior blog being possibly open to a move to a warmer climate if the economic opportunity presents itself. Hope the stars align for you and Cathy soon.

  12. Robert Willis’s comment is indeed quite thought provoking, and stimulated this corollary thought in me (which perhaps may be obvious to everyone else, but which I had not thought about):

    A missing consideration in these evaluations of network value is the extent to which a new type of network (internet connectivity, in this case) is for certain purposes redundant to existing networks of similar, equal or lesser cost that may already exist among some of the nodes.

    If you consider the people and entities (business, non-profit, social, governmental, etc.) that consume the network services offered as the nodes, then the internet connects equipment those users can use for various purposes, some of which are already served by various other networks.

    If those other purposes are served well and cheaply by older networks, and if those purposes are particularly salient for those users, this diminishes the opportunity value of the new network connection. And, conversely, for people and entities relatively disconnected from any pre-existing networks, the new links have greater value.

    Looked at this way, perhaps the proper context for analysis of the network is to look at the links of facilitated interactions between people and entities. Those interactions, as I see them, are primarily communications of some sort, and transactions (which are also communications). The communications come in typical bundles (conversation, video chat, code upload / download, recorded video or audio, piano lesson, etc.), and transactions are the familiar possibilities in the universe of economic exchange. Each possible type of interaction between any two nodes facilitated by any means is a link in this network. Each of those links has a cost and a latency (e.g. compare driving to your piano teacher’s house vs. doing the lesson over Skype, but also compare walking next door if the teacher happens to live there).

    If you don’t consider the pre-existing networks (telephony, roads, rail, air travel, postal mail, snail mail, international banking, catalogs, etc.) that facilitate these interactions, how can you assess the value of a new network type?

  13. @Steven It’s not so much opportunity cost, since you can have any number of connections to a given node. A better economic term would be inflation – pumping out currency decreases the value of a banknote. Opportunity cost does pretty accurately describe what Eric was talking about, where the sheer number of connections is less important than how many resources you apply to each one.

  14. O/T offered a descriptive model where nodes or their connections are assumed to have unequal value, without any model for why they do. Eric posited a generative model wherein communication has a space-time frictional cost. Subsequent commentary has pointed out that the more generalized generative model is that networking (in the generalized conceptualization of communication and/or group formation) has a myriad of genres of opportunity cost (e.g. even political opportunity cost in cooperative game theory), so this can account for preferences in group formation which may in some cases be independent of physical transport costs.

    Something else occurred to me while reading the O/T paper before reading Robert Willis’s thoughts, and I think combining the opportunity cost generalization with the following insight might model his point. Note that if the possible connections between nodes are limited by opportunity cost weighted compatibility of groups of nodes, then we can approximate a model of the network as connections between groups (aka clusters) of nodes. In this case, the equations for relative value of network mergers changes such that it is possible for the value proposition to invert between small and larger networks, if the larger network has fewer groupings (on an opportunity cost potential connections weighted basis). O/T mentioned clusters but in the context of their descriptive model of assumed unequal value. The key point of opportunity cost is that value is relativistic to the observer. The highly relativistic model is capable of higher-order effects such as those described by Robert Willis. Demographics matter.

    I want to investigate whether Verlinde’s entropic force emergent information based gravitation model is applicable and perhaps a generative mathematical foundation.

  15. “I want to investigate whether Verlinde’s entropic force emergent information based gravitation model is applicable and perhaps a generative mathematical foundation”

    That connection I am curious to see!

  16. @Winter, it’s just an intuitive hunch so perhaps nothing will come from it, or perhaps it might be unifying and generative. The fractal nature and the strange attractor from Chaos theory may somehow be related. I have no free bandwidth to attempt to assimilate it now. Perhaps one of you polymath geniuses can.

    Another tidbit, potentially a clue. Martin Armstrong wrote:
    > Markets are fractal. So whatever you see on one level of time must exist on all levels of time, or else it is not real. The Dow made a Yearly FALSE MOVE on a number of occasions.

    I suspect Armstrong is neither a kook nor a charlatan, although since his methods are not entirely open source it is impossible to be sure. He claims his methods are machine-learning driven, with $billions worth of archaeological data (e.g. from ancient coin collecting) aggregated into the model.

  17. Offhand, I would have guessed that the fractal structure of biologically evolved solutions has a lot to do with the way evolution reuses code to make the same fixed-size spec control the structure at many different levels. This is also why symmetry is so prevalent.

    Of course, there could still be a more fundamental mathematical principle at work here, which evolution was lucky enough to be able to exploit. But the hack of repurposing one chunk of DNA for new purposes seems pretty fundamental to how evolution actually seems to work, so I might be inclined to slightly lower my belief that biological networks are themselves evidence of the fundamental mathematical truth.

    1. >Offhand, I would have guessed that the fractal structure of biologically evolved solutions has a lot to do with the way evolution reuses code to make the same fixed-size spec control the structure at many different levels.

      That’s an intelligent conjecture, but it doesn’t explain why human transport networks also very frequently look self-scaling, and are more likely to as they mature. It’s not expensive for humans to use different planning rules at different scales, and yet…

  18. > it doesn’t explain why human transport networks also very frequently look self-scaling, and are more likely to as they mature.

    Sure. I didn’t mean to argue with the theory, or denigrate the evidence from non-DNA-based structures, merely to suggest that there are confounders that make DNA-based structures less than ideal evidence for the theory.

    Or, alternatively, an interesting clue: If biologically evolved networks have this property, but are sort of constrained to it by the way DNA evolution seems to work, does this provide any hints about the broader question? I’m not sure that it does; as you say, in principle human designers don’t have the same constraints. But it’s striking that DNA seems to have come up with the same solution not in spite of its constraints, but rather because of them.

    But I’m just spinning ideas now, out of not much.

  19. Just quickly skimmed the paper now; can’t be bothered to read much of Odlyzko, who’s been dead wrong about some important stuff. I think the issue of which growth curve to use depends on what kind of value we’re talking about here. There’s access value, where you are now able to collaborate with anyone in the world through the network, particularly if you’re willing to use low-bandwidth text. That obviously follows Metcalfe’s law and likely dominates for small networks, which is the kind of network Metcalfe was installing when he was at 3Com.

    As the network gets larger, you don’t have the personal bandwidth nor inclination to communicate with everyone, so you have to start filtering and creating smaller clusters, leading to Zipf’s law coming in for the communication value. This explains the sigmoid curve that Robert Willis posited above.

    Now, the great breakthrough of the internet is still from the access value, because you can create these clusters with anyone from anywhere on the network. But the communication value can never reach that theoretical limit, because you are always limited by time and interest.

    The paradigmatic example here is actually open source, which was the first major collaborative phenomenon to be built on and driven by the internet. Some random programmer from India can contribute to the linux kernel, exhibiting the access value of n^2 while always being limited by the pool of programmers who want to code for free, ie n log n communication value.

    Now that most of the developed world is online and the n^2 access value is mostly realized, what’s interesting is how we maximize that communication value by creating filtering or matching software to help virtual communication clusters develop. Economists are currently thinking about this and its implications, take this blog post today talking about how the current matching software has impacted all kinds of communities. I think his analysis is largely correct.

    We have a long way to go to really develop such filtering software; it is very much in its infancy. A dumb way to do it is what facebook or linkedin does, simply moving our physical networks online and going through a friend of a friend. But nobody has really done it well, and we have a long way to go to unlock even a fraction of these effects, ie we are far, far away from these theoretical limits. We will come closer as the networks get faster, to allow even video-conferencing access with anyone on the network, and as matching software gets much more sophisticated.

  20. @Lima, reconciling your comment with my prior one, I posit that fungible money will decline in utility relative to Eric’s Inverse Commons. The discussion of the interplay with Bitcoin and Nash’s ideal money might be intriguing as well.

  21. I find your explanation confuses the cost/benefit of a node with the cost/benefit of a connection between two nodes.

    > the value of all potential connections is equal, not variable

    Here you mention the value of a connection.

    >The value of a node is by hypothesis constant

    Here you mention the value of a node.

    I am guessing you meant the value of a node. For example, a node may have high value because it provides a useful service (such as an online encyclopedia, or website for shopping). Your insight would be that you would get a certain kind of network even if all nodes were of equal value.

    >the value of a node is by hypothesis constant: the access cost is the sum over all other nodes of any cost metric over a path to each node – distance, hop count, whatever.

    This makes sense. Here “access cost” refers to the cost of accessing the other nodes from a node. (This shouldn’t be confused with the cost of accessing said node *from* all other nodes – call it the reverse access cost. The sum of all the access costs and the sum of all the reverse access costs are equal, of course.) In addition, a potential connection would have a value in the reduction of the access cost of a node. These would not all be equal, depending on what connections already existed in the network. But the *costs* of potential connections could be equal, depending on what had to be done to make the connection (put up telephone wires or whatever). I would guess that operators of nodes would choose to establish connections with the highest values and the lowest costs.

  22. I’ve read the Odlyzko-Tilly paper now and it seems to be a lot more limited in scope. It refers to a network of n nodes where any node can communicate with any other node. It doesn’t say anything about the structure of this network in terms of what physical connections exist to allow communication.

    It does refer to ISPs peering, but their analysis only considers the effect of the ISPs’ subscribers becoming able to communicate with each other. Once the two ISPs are peered, they are considered as one large, completely-connected network, not a network with two highly-connected sub-networks. In addition, it does not consider the benefit of being able to make connections through another ISP’s network to a node outside that network using that ISP’s external connections.

    Hence, I don’t see that this paper says anything about “clusters of clusters” in a network, fractal structure, or people choosing to build connections with the best cost/benefit ratio.

    >in a world where not all connections have equal value, people build only the connections with the best cost-benefit ratio

  23. I don’t think your analysis holds up. Indeed I can prove it doesn’t. It may be that networks tend toward the fractal network-of-networks model you suggest, but the value of a network (i.e. utility/economic value provided to society) goes as n^2 if every connection has the same value. All one needs to derive Metcalfe’s law is that the value of a connection between any two nodes is a constant c, which (by a simple counting argument) implies that the value of a network with n nodes is c*n(n-1)/2.

    By restricting which links get built (via economic factors) you might affect how many nodes end up on a network or what structure it takes but that doesn’t affect the overall societal value provided by the network.

    I think the problem is that when you introduce the notion of access cost you are implicitly rejecting the assumption that ALL CONNECTIONS (regardless of number of hops, distance, etc..) between pairs of nodes have the same constant value. If you really adopt the assumption that all connections have the same value then everyone ignores number of hops, distance etc.. because that doesn’t affect network value.

    In other words, by assuming that network operators try to minimize number of hops (or whatever metric) you’re implicitly reintroducing the idea that some connections (those with more hops, longer ones, etc.) are worth less than others. This is true in the real world but it means your analysis collapses back into the original.

  24. No, take that back: your analysis isn’t explicit enough for me to identify the problem. I’m not even sure what your n log(n) figure is supposed to be measuring (it’s not the sum of connection value over all connections).

    Maybe what you are trying to claim is that (the sum of connection values over all connections) – (the sum of costs over all connections) goes like n*log(n).

    But this can’t be true in general. Consider the model where nodes occur on the plane at integer coordinates, connections follow the Manhattan metric (no diagonals), and costs are proportional to length. It is easy to show that a k x k region can be fully connected with < k^2 connections. Thus a single network provider in this model can build a network with value proportional to n^2 at cost proportional to n (n = k^2), thus (in the limit) value – cost = O(n^2).
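    A quick arithmetic check of that construction (assuming unit-length links between lattice neighbors and taking the per-pair value constant c = 1; my toy numbers, not a general proof):

    ```python
    def grid_network(k):
        """Span a k x k integer lattice with a comb: one vertical spine plus
        horizontal rows -- a spanning tree, so n - 1 unit-length links."""
        n = k * k
        cost = n - 1                  # each link has Manhattan length 1
        value = n * (n - 1) // 2      # every pair connected, worth c = 1 each
        return n, cost, value

    for k in (4, 8, 16, 32):
        n, cost, value = grid_network(k)
        print(f"k={k:2d}  n={n:4d}  cost={cost:5d}  value-cost={value - cost:,}")
    # cost grows like n while value grows like n^2, so value - cost is O(n^2)
    ```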

    Adding multiple providers makes no difference because no provider will build a link to another unless that link is at least as valuable (value – cost) as simply building an isolated network. So there is a formal model in which the cost of a link varies semi-realistically (with distance) but network value – cost should still go as n^2.

    Maybe you could make your claim more precise.

  25. > All one needs to derive Metcalfe’s law is that the value of a connection between any two nodes is a constant c, which (by a simple counting argument) implies that the value of a network with n nodes is c*n(n-1)/2.

    But the marginal value of being able to reach new nodes isn’t the same as that of the previous nodes. If The Bride of Monster is already able to get to a hundred sites that let her look at pictures of cute puppies, how much value is added by giving her access to a hundred more? I think a case can be made that the total value of each node’s access to n other nodes is on the order of ln(n), so that the total network value indeed scales more like n*ln(n).
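    One way to make that case precise (a standard Zipf-style weighting, offered as a sketch rather than a claim about what any particular user actually values): if a node’s k-th most useful connection is worth roughly 1/k, then

    $$ V_{\text{node}}(n) \;\approx\; \sum_{k=1}^{n} \frac{1}{k} \;\approx\; \ln n, \qquad V_{\text{network}}(n) \;\approx\; n \ln n. $$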
