The lack of predictive power over complex systems, whether designed by humans or evolved by nature, is a foundational problem in contemporary science. The Internet offers a paradigmatic example: nothing inherent in the design of the Internet architecture can explain its peculiar complex large-scale structure, unexpectedly discovered decades after its inception. We are faced with an unsettling truth: the Internet topology has acquired emergent large-scale properties that are beyond our full understanding, much less control. The story is strikingly similar with the Web. If the goal is to increase not only our understanding of complex networks but also our ability to predict and engineer them, we believe the most promising direction is to study how this peculiar macroscopic structure relates to the function(s) of the network. Our proposition is further strengthened by an astonishing fact: the peculiar structural characteristics of the Internet turn out to be eerily consistent with those of other complex networks found in nature, in particular networks that exhibit naturally efficient, if not optimal, routing behavior without any global knowledge of network structure, e.g., neural and social networks. This discovery poses a formidable intellectual challenge for network science and engineering. Conventional wisdom holds that finding communication paths to specific destinations through a network requires continually exchanging information about the status of connectivity between all nodes. The fundamentally unscalable overhead associated with this information exchange is built into our primary communication technologies today, including the Internet. So we find it irresistibly interesting that so many other real networks in nature somehow "route traffic" efficiently without any global view of the system: nodes do not propagate any information about their connectivity status, yet they efficiently find intended communication targets anyway.

Our brains are a humbling example: to function, they must successfully transmit specific signals to appropriate places in the body, yet no neuron has a full view of global inter-neuron connectivity. Milgram's experiments provided another classic demonstration of efficient routing without exchange of connectivity-status information: humans can find paths to destinations through their social acquaintance network, even though no human has global knowledge of its structure. If man-made complex networks such as the Internet have a structure similar to the many networks in nature that can route effectively without global topology awareness, can network routing research take advantage of this efficiency? In our previous project, primarily motivated by Internet routing scalability problems, we focused on this question and introduced a new theoretical framework to support its study. Our framework generalizes Kleinberg's seminal explanation of Milgram's experiments. In that explanation, a network consists of two types of links: local and long-range. The long-range links exist with some probability that depends on the shortest-path distance between nodes in the subgraph composed of local links. In our approach, we generalize the local graph to be a hidden metric space. We call our spaces hidden because they play the role of an underlying coordinate system not readily observable by examining the network topology; instead, they use internal attributes of individual nodes to impose some navigable shape onto the network. By metric we mean that for each pair of nodes there is a defined non-negative distance between them that satisfies the triangle inequality. Specifically, the distance between two nodes in the hidden space reflects the similarity of their intrinsic attributes, functional or structural. Nodes closer in the hidden space, i.e., more similar, are connected in the network topology with higher probability.
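The hidden-metric-space idea can be illustrated with a minimal simulation: place nodes in a simple metric space, connect pairs with a probability that decreases with their hidden distance, and then route greedily by always forwarding to the neighbor closest to the destination in the hidden space, with no node holding a global view of the topology. The choices below (a one-dimensional circle as the hidden space, the decay form `p(d) = 1 / (1 + (d/scale)^ALPHA)`, and the parameter values) are illustrative assumptions for this sketch, not the specific model from the framework described above.

```python
import math
import random

random.seed(42)

N = 200       # number of nodes (illustrative)
AVG_DEG = 6   # rough target average degree (illustrative)
ALPHA = 2.5   # how fast connection probability decays with hidden distance

# Hidden metric space: angles on a circle; the metric is arc length,
# which is non-negative and satisfies the triangle inequality.
coords = [random.uniform(0, 2 * math.pi) for _ in range(N)]

def hidden_distance(i, j):
    d = abs(coords[i] - coords[j])
    return min(d, 2 * math.pi - d)  # shorter arc between the two angles

# Nodes closer in the hidden space (more "similar") are connected
# with higher probability.
adj = [set() for _ in range(N)]
scale = AVG_DEG * math.pi / N  # crude normalization of the decay scale
for i in range(N):
    for j in range(i + 1, N):
        p = 1.0 / (1.0 + (hidden_distance(i, j) / scale) ** ALPHA)
        if random.random() < p:
            adj[i].add(j)
            adj[j].add(i)

def greedy_route(src, dst, max_hops=50):
    """Forward to the neighbor closest to the destination in the *hidden*
    space, using only local knowledge. Returns the path, or None if the
    walk gets stuck in a local minimum or exceeds max_hops."""
    path = [src]
    current = src
    while current != dst and len(path) <= max_hops:
        if not adj[current]:
            return None
        nxt = min(adj[current], key=lambda v: hidden_distance(v, dst))
        # Stuck: no neighbor is closer to the destination than we are.
        if nxt != dst and hidden_distance(nxt, dst) >= hidden_distance(current, dst):
            return None
        path.append(nxt)
        current = nxt
    return path if current == dst else None

# Fraction of random source-destination pairs that greedy routing reaches.
trials = 500
successes = sum(
    1 for _ in range(trials)
    if greedy_route(*random.sample(range(N), 2)) is not None
)
print(f"greedy routing success rate: {successes / trials:.2f}")
```

The success rate of such greedy forwarding is one common proxy for the "navigability" of a topology: it measures how well the hidden geometry guides purely local decisions toward arbitrary destinations.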