2010: HOW IS THE INTERNET CHANGING THE WAY YOU THINK?

Nicholas A. Christakis

Sterling Professor of Social and Natural Science, Yale University; Co-author, Connected: The Surprising Power of Our Social Networks and How They Shape Our Lives

Efforts to change the way we think—and to enhance our cognitive capacity—are ancient. Brain enhancers come in several varieties. They can be either hardware or software, and they can be either internal or external to our bodies. External hardware includes things like cave paintings, written documents, eyeglasses, wristwatches, wearable computers, or brain-controlled machines. Internal hardware includes things like mind-altering substances, cochlear implants, or intra-cranial electrical stimulation. Internal software includes things like education, meditation, mnemonics, and cognitive therapy. And external software includes things like calendars, voting systems, search engines, and the Internet.

I've had personal experience with most of these—save cave painting and the more esoteric forms of hardware—and I think I can say with confidence that they have not changed my brain.

What especially attracts my attention, though, is that the more complex types of external software—including the Internet—tend to involve communication and interaction, and thus they tend to be specifically social: they tend to involve the thoughts, feelings, and actions of many individuals, pooled in some way to make them accessible to individuals, including me. The Internet thus amplifies an age-old feature of the human mind: our nature as a species as homo dictyous (network man), an innate tendency we all have to connect with others and to be influenced by them. In this regard, the Internet is both mind-expanding and atavistic.

The Internet is no different than previous (equally monumental) brain-enhancing technologies such as books or telephony, and I doubt whether books and telephony have changed the way I think, in the sense of actually changing the way my brain works (which is the particular way I am taking the question before us). In fact, I would say that it is much more correct to say that our thinking gave rise to the Internet than that the Internet gave rise to our thinking. Another apt analogy is perhaps mathematics. It has taken centuries for humans to accumulate mathematical knowledge; and I learned geometry and calculus in high school in a way that probably would have astonished mathematicians just a few centuries ago. But, like other students, I did this with the same brain we've all had for millennia. The math surely changed how I think about the world. But did it change the way I think? Did it change my brain? The answer is mostly no.

To be clear, the Internet is assuredly changing quite a few things related to cognition and social interaction. One widely appreciated and important example of both is the way the Internet facilitates hive-mind phenomena, like Wikipedia, that integrate the altruistic impulses and the knowledge of thousands of far-flung individuals. To the extent that I participate in such things (and I do), my thinking and I are both affected by the Internet.

But most thinking serves social ends. A strong indicator of this fact is that the intellectual content of most conversation is trivial, and it certainly is not focused on complex ideas about philosophy or mathematics. In fact, how often—unless we are ten-year-old boys—do we even think or talk about predators or navigation, which have ostensibly been important topics of thought and conversation for quite some time? Mostly, we think about, and talk about, each other. This is probably even true for those of us who spend our lives as scientists.

Indeed, our brains likely evolved their capacity for intelligence in response to the demands of social (rather than environmental) complexity. The evolution of larger social groups among primates required and benefited from the evolution of a larger neo-cortex (the outer, thinking part of our brain), and managing social complexity in turn required and benefited from the evolution of language. Known as the "social brain hypothesis," this idea posits that the reason we think at all has to do with our embeddedness in social life.

What role might technology play in this? Very little, it turns out. Consider, for example, the fact that the size of military units has not changed materially in thousands of years, even though our communication technology (from signal fires to telegraphy to radio to radar) has. The basic unit in the Roman army (the "maniple") was composed of 120-130 men, and the size of the analogous unit in modern armies (the company) is still about the same.

The fact that effective human group size has not changed very substantially—even though communication technology has—suggests that it is not the technology that is crucial to our performance. Rather, the crucial factor is the ability of the human mind to track social relationships, to form mental rosters that identify who is who, and to form mental maps that track who is connected to whom and how strong or weak, or cooperative or adversarial, those relationships are. I do not think that the Internet has changed the ability of my brain to do this. While we may use the word "friends" to refer to all our contacts online, they are decidedly not our friends, in the truly social, emotional, or biological sense of the word.

There is no new self. There are no new others. And so there is no new brain, and no new way of thinking. We are the same species after the Internet as before. Yes, the Internet can make it easy for us to learn how to make a bomb or find a willing sexual partner. But the Internet itself is not changing the fundamental reality of my thinking any more than it is changing our fundamental proclivity to violence or our innate capacity for love.

I still remember typing essays on a much-loved typewriter in my first year of university. Then came the first computer, the first email account, the slow yet fluid entry into a new digital world that felt strangely natural. The advent of the Internet age happened progressively; we saw it develop like a child born of many brains, a protean animal whose characteristics were at once predictable and unknown. As soon as the digital sphere became a worldwide reality recognizable as a new era, predictions and analyses about it grew. Edge itself was born as the creature was still growing new limbs. The tools for research and communication about this research developed along with new thinking about mind-machine interaction, about the future of education, about the impact of the Internet on texts and writing, about the issues of filtering, relevance, learning and memory.

And then somehow the creature became autonomous, an ordinary part of our universe. We are no longer surprised, no longer engaged in so much meta-analysis: we are dependent, and some of us are addicted to this marvelous tool, this multi-faceted medium that is — as predicted even ten years ago — concentrating all of communication, knowledge, entertainment, business. I, like so many of us, spend so many hours before a flat computer screen, typing away, even when surrounded by countless books, that it is hard to say exactly how the Internet has affected me. The Internet is becoming as ordinary as the telephone. Humans are very good at adapting to the technologies we create, and the Internet is the most malleable, the most human of all technologies, just as it can also be intensely alienating from everything we lived before now.

I waver between these two positions: at times gratefully dependent on this marvel, at other times horrified at what this dependence signifies. Too much concentrated in one place, too much accessible from one's house, the need to move about in the real world nearly nil, the rapid establishment of social networking websites changing our relationships, the reduction of three-dimensionality to that flat screen. Rapidity, accessibility, one click for everything: where has slowness gone, and tranquillity, solitude, quiet? The world I took for granted as a child, and that my childhood books beautifully represented, jars with the brand-new world of artificial glare and electrically created realities, faster, louder, unrelated to nature, self-contained.

The technologies we create always have an impact on the real world, but rarely has a technology had such an impact on minds. We know what is happening to those who were born after the advent of the Internet, and for those like me who started out with typewriters, books, slowness, reality measured by geographical distance and local clocks, the world that is emerging now is very different indeed from the world we knew.

I am of that generation for which adapting to computers was welcome and easy, but for which the pre-Internet age remains real. I can relate to those who call the radio the wireless, and I admire people in their 70s or 80s who communicate by email, because they come from further away still. Perhaps the way forward would be to emphasize the teaching of history in schools, to develop curricula on the history of technology, to remind today's children that their technology, all-embracing as it feels, is relative and does not represent the totality of the universe. Millions of children around the world don't need to be reminded of this — they have no access to technology at all, many not even to modern plumbing — but those who do should know how to place this tool historically and politically.

As for me, I am learning how to make room for the need to slow down and disconnect without giving up on my addiction to Google, email, and rapidity. I was lucky enough to come from somewhere else, from a time when information was not digitized. And that is what perhaps enables me to use the Internet with a measure of wisdom.

Simon Baron-Cohen

Professor of Developmental Psychopathology, University of Cambridge; Fellow, Trinity College, Cambridge; Director, Autism Research Centre, Cambridge; Author, The Pattern Seekers

Like yours, all my email goes into my Sent mailbox, just sitting there in case I want to check back on what I said to whom years ago. So what a surprise to see that I send approximately 18,250 emails each year (roughly 50 a day). Assuming 3 minutes per email (let's face it, I can't afford to spend too long thinking about what I want to say), that's about 1000 hours a year on email alone. I've been on email since the early 90s. Was that time well spent?

The answer is both yes and no. Yes, I have been able to keep in touch with family, friends, and colleagues in far-flung corners of the planet with ease, and have managed to pull off projects with teams spread across different cities in timescales that previously would have been unthinkable. All this feeds my continued use of email. But whilst these undoubted benefits are the reasons why I continue to email, it is not without its own cost. Most importantly, as the above analysis shows, email eats my time just as it likely eats yours. And unlike Darwin's famous 15,000 letters (penned with thought, and now the subject of the Darwin Correspondence Project in my university library in Cambridge), three-minute email exchanges do not deliver communication with any depth and as such are not intellectually valuable in their own right.

And we all recognize that email has its addictive side. Each time a message arrives there's just the chance that it might contain something exciting, something new, something special, a new opportunity. Like all effective behavioural reinforcement schedules, the reward is very intermittent: maybe one in 100 emails contains something I really want to know or hear about. That's just enough to keep me checking my Inbox, but it means perhaps only 10 of the 1000 hours I spent on email this year were hours I actually wanted to spend.
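The arithmetic behind the figures in the last two paragraphs is easy to check. A minimal back-of-the-envelope sketch, using the essay's own counts and its 3-minutes-per-email assumption:

```python
# Back-of-the-envelope check of the email figures above.
emails_per_day = 50
emails_per_year = emails_per_day * 365            # 18,250 emails
minutes_per_email = 3
hours_per_year = emails_per_year * minutes_per_email / 60

print(emails_per_year)        # 18250
print(round(hours_per_year))  # 912 -- "about 1000 hours" with rounding

# If only 1 email in 100 is genuinely wanted, the wanted share of that
# time is about 1% of ~1000 hours, i.e. roughly 10 hours.
print(round(hours_per_year * 0.01))  # 9
```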

Bite-size emails also carry another cost: We all know there's no substitute for thinking hard and deep about a problem and how to solve it, or for getting to grips with a new area, and such tasks demand long periods of concentrated attention. Persistent, frequent email messages threaten our capacity for the real work. Becoming aware of what email is doing to our allocation of time is the first step to regaining control. Like other potential addictions we should perhaps attempt to counter the email habit by restricting it to certain times of the day, or by creating email-free zones by turning off Wi-Fi. This year's Edge question at least gives me pause to think whether I really want to be spending 1000 hours a year on email, at the expense of more valuable activities.

Richard Dawkins

Evolutionary Biologist; Emeritus Professor of the Public Understanding of Science, Oxford; Author, Books Do Furnish a Life

If, forty years ago, the Brockman Question had been "What do you anticipate will most radically change the way you think during the next forty years?" my mind would have flown instantly to a then recent article in Scientific American (September 1966) about 'Project MAC'. Nothing to do with the Apple Mac, which it long pre-dated, Project MAC was an MIT-based cooperative enterprise in pioneering computer science. It included the circle of AI innovators surrounding Marvin Minsky but, oddly, that was not the part that captured my imagination. What really excited me, as a user of the large mainframe computers that were all you could get in those days, was something that nowadays would seem utterly commonplace: the then astonishing fact that up to 30 people, from all around the MIT campus and even from their homes, could simultaneously log in to the same computer, simultaneously communicating with it and with each other. Mirabile dictu, the co-authors of a paper could work on it simultaneously, drawing upon a shared database in the computer, even though they might be miles apart. In principle, they could be on opposite sides of the globe.

Today that sounds absurdly modest. It's hard to recapture how futuristic it was at the time. The post-Berners-Lee world of 2009, if we could have imagined it forty years ago, would have seemed shattering. Anybody with a cheap laptop computer, and an averagely fast WiFi connection, can enjoy the illusion of bouncing dizzily around the world in full colour, from a beach webcam in Portugal to a chess match in Vladivostok, and Google Earth actually lets you fly the full length of the intervening landscape as if on a magic carpet. You can drop in for a chat at a virtual pub, in a virtual town whose geographical location is so irrelevant as to be literally non-existent (and the content of whose LOL-punctuated conversation, alas, is likely to be of a drivelling fatuity that insults the technology that mediates it).

'Pearls before swine' over-estimates the average chat-room conversation, but it is the pearls of hardware and software that inspire me: the Internet itself and the World Wide Web, succinctly defined by Wikipedia as "a system of interlinked hypertext documents contained on the Internet." The Web is a work of genius, one of the highest achievements of the human species, whose most remarkable quality is that it was not constructed by one individual genius like Tim Berners-Lee or Steve Wozniak or Alan Kay, nor by a top-down company like Sony or IBM, but by an anarchistic confederation of largely anonymous units located (irrelevantly) all over the world. It is Project MAC writ large. Suprahumanly large. Moreover, there is not one massive central computer with lots of satellites, as in Project MAC, but a distributed network of computers of different sizes, speeds and manufacturers, a network that nobody, literally nobody, ever designed or put together, but which grew, haphazardly, organically, in a way that is not just biological but specifically ecological.

Of course there are negative aspects, but they are easily forgiven. I've already referred to the lamentable content of many chat room conversations without editorial control. The tendency to flaming rudeness is fostered by the convention – whose sociological provenance we might discuss one day – of anonymity. Insults and obscenities, to which you would not dream of signing your real name, flow gleefully from the keyboard when you are masquerading online as 'TinkyWinky' or 'FlubPoodle' or 'ArchWeasel'.

And then there is the perennial problem of sorting out true information from false. Fast search engines tempt us to see the entire web as a gigantic encyclopaedia, while forgetting that traditional encyclopaedias were rigorously edited and their entries authored by chosen experts. Having said that, I am repeatedly astounded by how good Wikipedia can be. I calibrate Wikipedia by looking up the few things I really do know about (and may indeed have written the entry for in traditional encyclopaedias) say 'Evolution' or 'Natural Selection'. I am so impressed by these calibratory forays that I go, with some confidence, to other entries where I lack first-hand knowledge (which was why I felt able to quote Wikipedia's definition of the Web, above). No doubt mistakes creep in, or are even maliciously inserted, but the half-life of a mistake, before the natural correction mechanism kills it, is encouragingly short. John Brockman warns me that, while Wikipedia is indeed excellent on scientific matters, this is not always so "in other areas such as politics and popular culture where . . . edit wars continually break out." Nevertheless, the fact that the Wiki concept works, even if only in some areas such as science, flies so flagrantly in the face of all my prior pessimism, that I am tempted to see it as a metaphor for all that deserves optimism about the World Wide Web.

Optimistic we may be, but there is a lot of rubbish on the Web, more than in printed books, perhaps because books cost more to produce (and, alas, there's plenty of rubbish there too). But the speed and ubiquity of the Internet actually helps us to be on our critical guard. If a report on one site sounds implausible (or too plausible to be true) you can quickly check it on several more. Urban legends and other viral memes are helpfully catalogued on various sites. When we receive one of those panicky warnings (often attributed to Microsoft or Symantec) about a dangerous computer virus, we do not spam it to our entire address book but instead Google a key phrase from the warning itself. It usually turns out to be, say, "Hoax Number 76", its history and geography meticulously tracked.

Perhaps the main downside of the Internet is that surfing can be addictive and a prodigious timewaster, encouraging a habit of butterflying from topic to topic, rather than attending to one thing at a time. But I want to leave negativity and nay-saying and end with some speculative – perhaps more positive – observations. The unplanned worldwide unification that the web is achieving (a science-fiction enthusiast might discern the embryonic stirrings of a new life form) mirrors the evolution of the nervous system in multicellular animals. A certain school of psychologists might see it as mirroring the development of each individual's personality, as a fusion among split and distributed beginnings in infancy.

I am reminded of an insight that comes from Fred Hoyle's science fiction novel, The Black Cloud. The cloud is a superhuman interstellar traveller, whose 'nervous system' consists of units that communicate with each other by radio – orders of magnitude faster than our puttering nerve impulses. But in what sense is the cloud to be seen as a single individual rather than a society? The answer is that interconnectedness that is sufficiently fast blurs the distinction. A human society would effectively become one individual if we could read each other's thoughts through direct, high speed, brain-to-brain radio transmission. Something like that may eventually meld the various units that constitute the Internet.

This futuristic speculation recalls the beginning of my essay. What if we look forty years into the future? Moore's Law will probably continue for at least part of that time, enough to wreak some astonishing magic (as it would seem to our puny imaginations if we could be granted a sneak preview today). Retrieval from the communal exosomatic memory will become dramatically faster, and we shall rely less on the memory in our skulls. At present we still need biological brains to provide the cross-referencing and association, but more sophisticated software and faster hardware will increasingly usurp even that function.

The high-resolution colour rendering of virtual reality will improve to the point where the distinction from the real world becomes unnervingly hard to notice. Large-scale communal games such as Second Life will become disconcertingly addictive to many ordinary people who understand little of what goes on in the engine room. And let's not be snobbish about that. For many people around the world, 'first life' reality has few charms and, even for those more fortunate, active participation in a virtual world could be more intellectually stimulating than the life of a couch potato slumped in idle thrall to 'Big Brother'. To intellectuals, Second Life and its souped-up successors will become laboratories of sociology, experimental psychology and their successor disciplines, yet to be invented and named. Whole economies, ecologies, and perhaps personalities will exist nowhere other than in virtual space.

Finally, there may be political implications. Apartheid South Africa tried to suppress opposition by banning television, and eventually had to give up. It will be more difficult to ban the Internet. Theocratic or otherwise malign regimes, such as Iran and Saudi Arabia today, may find it increasingly hard to bamboozle their citizens with their evil nonsense. Whether, on balance, the Internet benefits the oppressed more than the oppressor is controversial, and at present may vary from region to region (see, for example, the exchange between Evgeny Morozov and Clay Shirky in Prospect, Nov-Dec 2009).

It is said that Twitter is playing an important part in the current unrest in Iran, and latest news from that faith-pit encourages the view that the trend will be towards a net positive effect of the Internet on political liberty. We can at least hope that the faster, more ubiquitous and above all cheaper Internet of the future may hasten the long-awaited downfall of Ayatollahs, Mullahs, Popes, Televangelists, and all who wield power through the control (whether cynical or sincere) of gullible minds. Perhaps Tim Berners-Lee will one day earn the Nobel Prize for Peace.

Eva Wisten

When you're on a plane, watching the cars below, the blinking, moving workings of a city, it's easy to believe that everything is connected, just moving parts in the same system. If you're one of the individual drivers on the ground, driving your car from B to A, the perspective is, of course, different. The individual driver feels very much like an individual, with a car to match their personality, on the way to their chosen destination. The driver never feels like a moving dot in a row of a very large number of other moving dots.

The Internet sometimes makes me suspect that I'm that driver. Having information from so many disparate systems merged (often invisibly) is steering my behavior into all kinds of paths, which I can only hope are beneficial. The visible connectedness through the Web has changed, maybe not how I think, but it has increased the number of people whose thoughts are in my head. Because of the Internet, the memes and calculations of more people (and/or computers) pass through us. Good or bad, this new level of connectedness sometimes gives me the feeling that if I could only be picked up a few feet above the ground, what I would see is an anthill. All the ants, looking so different and special up close, seem suspiciously alike from this height. This new tool for connections has made more ants available every time I need to carry a branch, just as there are more ants in the way when I want to get in with the picnic basket.

But, as a larger variety of thoughts and images passes by, as I can search a thought and see the number of people who have had the same thought before me — as more and more systems talk to each other and take care of all kinds of logistics — I do think that this level of connectedness has pushed us — beneficially — towards both the original and the local.

We can go original, either in creation or curation, and, if good, carve a new, little path in the anthill — or we can copy one of all the things out there and bring it home to our local group. Some ants manage to be original enough to benefit the whole anthill. But other ants can copy and modify the good stuff and bring it home. And in this marching back and forth, trying to get things done, communicate, make sense of things, I see myself not looking to leaders, but to curators who can efficiently signal where to find the good stuff.

What is made accessible to me through the Internet might not be changing how I think, but it does some of my thinking for me. And above all, the Internet is changing how I see myself. As real-world activity and connections continue to be what matters most to me, the Internet, with its ability to record my behavior, is making it clearer that I am, in thought and in action, the sum of the thoughts and actions of other people to a greater extent than I have realized.

Anton Zeilinger

Physicist, University of Vienna; Scientific Director, Institute of Quantum Optics and Quantum Information; President, Austrian Academy of Sciences; Author, Dance of the Photons: From Einstein to Quantum Teleportation

Yes, I have learned, like many others,

— to write short e-mails, because people don't want to read beyond line 10.

— to write single-issue e-mails, because any second or third issues get lost.

— to check my e-mails on the iPhone or BlackBerry every five minutes, because the important message could be arriving at any moment.

— to expect that our brain function will be significantly reduced in the coming decades to very simple decision-making, and so on and so on.

Well, seriously, I find it utterly impressive how the notion of information is becoming more and more important in our society. Or rather, the notion of what we think information is. What is information? From a very pragmatic operational point of view, one could argue that information is the truth value of a proposition. Is it raining now? Yes/no. Are airplanes flying because they are lighter than air? Yes/no. Does she love me? Yes/no.
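On this operational view, the answer to one yes/no question carries at most one bit, in Shannon's sense. A minimal sketch; the 50/50 prior below is an assumption chosen to make the bound tight:

```python
import math

# Shannon entropy of a yes/no proposition with probability p of "yes".
def bits(p):
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

print(bits(0.5))   # 1.0   -- a maximally uncertain question: one full bit
print(bits(0.99))  # ~0.08 -- an almost-settled question carries little information
```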

Evidently, there are questions which are easier to answer, and others which are very difficult, or maybe even impossible, to answer in a reliable way, like the last one. While for the first two questions we can devise scientific procedures to decide them, even including borderline cases, for the last question such an algorithm seems impossible, even though some of our biology friends try to convince us that it is just a matter of deterministic procedures in our brains and in our bodies. There are other questions which will forever be beyond any methodical scientific decision procedure, like: Does God exist? Or: Which of the two slits in a double-slit interference experiment does a quantum particle take?

These last two questions are of a very different nature, although both are unanswerable. The question whether God exists is not only beyond any solid scientific argumentation, it must be like that. Any other possibility would be the end of religion. If God were provably existent, then the notion of belief would be empty. Any religious behaviour would be mere opportunism. But what about the quantum question? Which of the two paths does a particle take in a double-slit experiment?

We learned from quantum physics that to answer this kind of question, we need to do an experiment which allows us to determine whether the particle takes slit A or slit B. But that, we also learned, significantly modifies the experiment itself. Answering the question implies introducing the specific apparatus which allows us to answer that specific question. Introducing an apparatus which permits us to determine which slit a particle takes automatically means that the phenomenon of quantum interference disappears, because of the unavoidable interaction with that apparatus. Or, in the picture of the famous Schrödinger cat, asking whether the cat is alive or dead immediately destroys the quantum superposition of the alive and dead states.
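The tradeoff can be seen in a toy calculation: with no which-path apparatus, the two slit amplitudes add before squaring and fringes appear; with path information present, the probabilities add and the fringes vanish. A minimal numerical sketch, in which the wavenumber, slit separation, and screen distance are invented illustrative values:

```python
import numpy as np

# Screen positions and toy two-slit geometry (all values illustrative).
x = np.linspace(-10, 10, 1001)
k, d, L = 20.0, 5.0, 50.0   # wavenumber, slit separation, screen distance

# Amplitudes for the two paths, as unit-amplitude waves from each slit.
psi_a = np.exp(1j * k * np.sqrt(L**2 + (x - d / 2) ** 2))
psi_b = np.exp(1j * k * np.sqrt(L**2 + (x + d / 2) ** 2))

# No which-path information: amplitudes add, then square -> fringes.
fringes = np.abs(psi_a + psi_b) ** 2

# Which-path apparatus present: probabilities add -> fringes vanish.
flat = np.abs(psi_a) ** 2 + np.abs(psi_b) ** 2

print("contrast, paths unknown: %.2f" % (fringes.max() - fringes.min()))  # ~4
print("contrast, paths known:   %.2f" % (flat.max() - flat.min()))        # 0.00
```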

Therefore, we have here a completely new situation, not encountered before in science and probably not in philosophy either. Creating a situation where a question can be answered completely modifies the old situation. An experimental quantum setup, or any quantum situation, can only represent a finite amount of information, here either interference or path information. And it is up to the experimentalist to decide which information actually exists, is real and manifest, in a concrete situation. The experimentalist does this by choosing the appropriate apparatus. So, information has a very fundamental nature of a new kind, not present in classical, non-quantum science.

What does this all have to do with the Internet? Today, we are busy developing quantum communication over large distances. Using quantum communication links, one will connect future quantum computers, which work in a completely new complexity class compared to existing computers. To the best of my knowledge, this is the first time that humanity has developed a technology which has no parallel at all in the known Universe. There are no quantum computers out there, assuming that the functioning of the brain can, in the end, be explained by non-quantum processes.

What will all that mean for our communication? This is impossible to tell. It is even harder to tell than it was, historically, to predict the applications of inventions like the laser or the microchip, to name two more recent examples. We will be entering a completely new world where information is even more fundamental than today. And it is to be hoped that, looking back, the present irritation experienced by many because of the Internet will appear to have been just an episode in the development of humanity. But maybe I am too optimistic.

John Brockman

"Love Intermedia Kinetic Environments." John Brockman speaking — partly kidding, but conveying the notion that Intermedia Kinetic Environments are In in the places where the action is — an Experience, an Event, an Environment, a humming electric world.

— The New York Times

On a Sunday in September 1966, I was sitting on a park bench reading about myself on the front page of the New York Times Arts & Leisure section. I was wondering whether the article would get me fired from my job at the New York Film Festival at Lincoln Center, where I was producing "expanded cinema" and "intermedia" events. I was twenty-five years old.

New and exciting ideas and forms of expression were in the air. They came out of happenings, the dance world, underground movies, avant-garde theater. They came from artists engaged in experiment. Intermedia consisted more often than not of unscripted, sometimes spontaneous theatrical events in which the audience was also a participant. I was lucky enough to have some small part in this upheaval, having been hired a year earlier by the underground filmmaker and critic Jonas Mekas to manage the Filmmakers' Cinémathèque and organize and run the Expanded Cinema Festival.

During that wildly interesting period, many of the leading artists were reading science and bringing scientific ideas to their work. John Cage gave me a copy of Norbert Wiener's Cybernetics; Bob Rauschenberg turned me on to James Jeans' The Mysterious Universe. Claes Oldenburg suggested I read George Gamow's 1,2,3...Infinity. USCO, a group of artists, engineers, and poets who created intermedia environments; La Monte Young's Theatre of Eternal Music; Andy Warhol's Factory; Nam June Paik's video performances; Terry Riley's minimalist music — these were master classes in the radical epistemology of a set of ideas involving feedback and information.

Another stroke of good luck was my inclusion in a small group of young artists invited by Fluxus artist Dick Higgins to attend a series of dinners with John Cage — an ongoing seminar about media, communications, art, music, and philosophy that focused on the ideas of Norbert Wiener, Claude Shannon, and Marshall McLuhan. Cage was aware of research conducted in the late 1930s and 1940s by Wiener, Shannon, Vannevar Bush, Warren McCulloch, and John von Neumann, who were all present at the creation of cybernetic theory. And he had picked up on McLuhan's idea that by inventing electric technology we had externalized our central nervous systems — that is, our minds — and that we now had to presume that "There's only one mind, the one we all share." We had to go beyond personal mind-sets: "Mind" had become socialized. "We can't change our minds without changing the world," Cage said. Mind as a man-made extension had become our environment, which he characterized as a "collective consciousness" that we could tap into by creating "a global utilities network."

Back then, of course, the Internet didn't exist, but the idea was alive. In 1962, J. C. R. Licklider, who had published "Man-Computer Symbiosis" in 1960 and described the idea of an "Intergalactic Computer Network" in 1961, was hired as the first director of the new Information Processing Techniques Office (IPTO) at the Pentagon's Advanced Research Projects Agency, an agency created as a response to Sputnik. Licklider designed the foundation for a global computer network. He and his successors at IPTO, including Robert Taylor and Larry Roberts, provided the ideas that led to the development of the ARPAnet, the forerunner of the Internet, which itself emerged as an ARPA-funded research project in the mid-1980s.

Inspired also by architect-designer Buckminster Fuller, futurist John McHale, and cultural anthropologists Edward T. ("Ned") Hall and Edmund Carpenter, I began to read avidly in the field of information theory, cybernetics, and systems theory. McLuhan himself introduced me to The Mathematical Theory of Communication by Shannon and Weaver, which began: "The word communication will be used here in a very broad sense to include all of the procedures by which one mind may affect another. This, of course, involves not only written and oral speech, but also music, the pictorial arts, the theater, the ballet, and in fact all human behavior."

Inherent in these ideas is a radical new epistemology. It tears apart the fabric of our habitual thinking. Subject and object fuse. The individual self decreates. I wrote a synthesis of these ideas in my first book, By the Late John Brockman (1969), taking information theory — the mathematical theory of communications — as a model for regarding all human experience. I began to develop a theme that has informed my endeavors ever since: New technologies beget new perceptions. Reality is a man-made process. Our images of our world and of ourselves are, in part, models resulting from our perceptions of the technologies we generate.

We create tools and then we mold ourselves in their image. Seventeenth-century clockworks inspired mechanistic metaphors ("The heart is a pump"), just as the self-regulating engineering devices of the mid-twentieth century inspired the cybernetic image ("The brain is a computer"). The anthropologist Gregory Bateson has characterized the post-Newtonian worldview as one of pattern, of order, of resonances in which the individual mind is a subsystem of a larger order. Mind is intrinsic to the messages carried by the pathways within the larger system and intrinsic also in the pathways themselves.

Ned Hall once pointed out to me that the most critical inventions are not those that resemble inventions but those that appear innate and natural. Once you become aware of this kind of invention, it is as though you had always known about it. ("The medium is the message." Of course, I always knew that).

Hall's candidate for the most important invention was not the capture of fire, the printing press, the discovery of electricity, or the discovery of the structure of DNA. The most important invention was ... talking. To illustrate the point, he told a story about a group of prehistoric cavemen having a conversation.

"Guess what?" the first man said. "We're talking." Silence. The others looked at him with suspicion.

"What's 'talking'?" a second man asked.

"It's what we're all doing, right now. We're talking!"

"You're crazy," the third man said. "I never heard of such a thing!"

"I'm not crazy," the first man said. "You're crazy. We're talking."

Talking, undoubtedly, was considered innate and natural until the first man rendered it visible by exclaiming, "We're talking."

A new invention has emerged, a code for the collective conscious, which requires a new way of thinking. The collective externalized mind is the mind we all share. The Internet is the infinite oscillation of our collective conscious interacting with itself. It's not about computers. It's not about what it means to be human — in fact it challenges, renders trite, our cherished assumptions on that score. It's about thinking. "We're talking."

Marissa Mayer

It's not what you know, it's what you can find out. The Internet has put resourcefulness and critical thinking at the forefront and relegated rote memorization of facts to mental exercise or enjoyment. Because of the abundance of information and this new emphasis on resourcefulness, the Internet creates a sense that anything is knowable or findable — as long as you can construct the right search, find the right tool, or connect to the right people. The Internet empowers better decision-making and a more efficient use of time.

Simultaneously, it also leads to a sense of frustration when the information doesn't exist online. What do you mean the store hours aren't anywhere? Why can't I see a particular page of this book? And, if not verbatim, has no one quoted it even in part? What do you mean that page isn't available? Page not found?

The Internet can facilitate an incredible persistence and availability of information, but given the Internet's adolescence, all of the information simply isn't there yet. I find that in some ways my mind has evolved to this new way of thinking, relying on the information's existence and availability, so much so that it's almost impossible to conclude that the information isn't findable because it just isn't online.

The Web has also enabled amazing dynamic visualizations, where an ideal presentation of information is constructed — a table of comparisons or a data-enhanced map, for example. These visualizations — be it news from around the world displayed on a globe or a sortable table of airfares — can greatly enhance our understanding of the world or our sense of opportunity. We can understand in an instant what would have taken months to create just a few short years ago. Yet, the Internet's lack of structure means that it is not possible to construct these types of visualizations over any or all data. To achieve true automated, general understanding and visualization, we will need much better machine learning, entity extraction, and semantics capable of operating at vast scale.
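A "sortable table of airfares" is the simplest instance of such a dynamic visualization: the same data, re-presented in whatever order the question demands. A minimal sketch with pandas; the routes, carriers, and fares below are invented purely for illustration:

```python
import pandas as pd

# Hypothetical airfare data, invented for illustration.
fares = pd.DataFrame({
    "route":    ["SFO-JFK", "SFO-LHR", "SFO-NRT", "SFO-CDG"],
    "carrier":  ["AirA", "AirB", "AirC", "AirD"],
    "fare_usd": [356, 612, 845, 590],
})

# One line turns a static listing into the comparison the reader wants.
print(fares.sort_values("fare_usd"))
```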

On that note, and in terms of future Internet innovation, the important question may not be how the Internet is changing how we think, but instead how the Internet is teaching itself to think.

Tom Standage

Business Affairs Editor, The Economist; Author, An Edible History of Humanity

The Internet has not changed the way I think. The old stone-age mental software still seems to be working surprisingly well in the 21st century, despite claims to the contrary. What the Internet has done, however, is sharpen my memory.

A quick search with a few well-chosen keywords is usually enough to turn a decaying memory of a half-forgotten article, scientific paper or news item into perfect recall of the information in question. Previously, these things at the penumbra of recollection could only be recovered with a great deal of effort or luck. The Internet has, in effect, upgraded my memory of such marginal items from haphazard and partial to reliable and total. This means I can swim freely through the Internet's vast oceans of information, safe in the knowledge that any connections between items that subsequently occur to me can still be made. (My own work as a journalist and author is based on making connections in this way, but the same is true for many other information workers, a category that encompasses a growing fraction of the workforce.)
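What makes "a few well-chosen keywords" so effective is the humble inverted index: each word points to the documents containing it, and a multi-word query is just a set intersection. A minimal sketch; the toy documents are invented for illustration:

```python
from collections import defaultdict

# Toy corpus standing in for the Internet's "vast oceans of information".
docs = {
    1: "half forgotten article about eyeglasses and scribes",
    2: "scientific paper on memory and recall",
    3: "news item about eyeglasses in the thirteenth century",
}

# Build the inverted index: word -> set of document ids containing it.
index = defaultdict(set)
for doc_id, text in docs.items():
    for word in text.split():
        index[word].add(doc_id)

def search(*keywords):
    """Return the ids of documents containing every keyword."""
    sets = [index[kw] for kw in keywords]
    return set.intersection(*sets) if sets else set()

print(search("eyeglasses", "scribes"))  # {1}: the half-forgotten article
```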

This is useful now, but I expect it to become much more useful as I get older and my memory starts to become less reliable — moving more of the information that passes through my mind into that penumbral region. Indeed, I am reminded of the impact that eyeglasses had after their development in the late 13th century (though my recollection of the details was sketchy until I, ahem, asked the Internet).

As Giordano of Pisa noted in 1306, "It is not twenty years since there was discovered the art of making spectacles that help one see well, an art that is one of the best and most necessary in the world." Eyeglasses doubled the useful working life of scribes and skilled craftsmen who were otherwise liable to suffer from farsightedness (presbyopia) from the age of around 40. The historian David Landes has suggested that this use of technology overcame what had previously been regarded as an unavoidable human limitation, and then spurred further innovations of a similar nature, such as the development of fine optical instruments and precision machine tools.

Perhaps the same will be true of the way the Internet enhances our mental faculties in the years to come.

Ai Weiwei

Artist; Curator; Architectural Designer (The Bird's Nest); Cultural and Social Commentator; Activist

I only think on the Internet anymore. My thinking is now divided into on the net and off the net. If I'm not on the net, I don't think that much; when I'm on the net, I start to think. In this way, my thinking becomes always part of something else.

Terrence J. Sejnowski

Computational Neuroscientist; Francis Crick Professor, the Salk Institute; Investigator, Howard Hughes Medical Institute; Co-author (with Patricia Churchland), The Computational Brain

What is the impact of spending hours each day in front of a monitor, surfing the Internet and playing games? Brains are highly adaptable, and experiences have long-term effects on the brain's structure and function. You are aware of some of these changes and call them your memory, but they are just the tip of the iceberg. We are not aware of more subtle changes, which nonetheless can affect your perception and behavior. These changes occur at all levels of your brain, from the earliest perceptual levels to the highest cognitive levels.

Priming is a dramatic example of unconscious learning, in which a brief exposure to an image or a word can affect how you respond to the same image or word, even in degraded forms, many months later. In one experiment, the outlines of animals and other familiar objects were viewed briefly, and 17 years later the subjects could still identify the animals and objects above chance levels from versions in which half the outlines were erased. Some of the subjects did not remember participating in the original experiment. With conceptual priming, an object like a table can prime the response to a chair. Interestingly, priming decreases reaction times and is accompanied by a decrease in brain activity — the response becomes faster and more efficient.

Brains, especially youthful ones, have an omnivorous appetite for information, novelty and social interaction, but it is less obvious why we are so good at unconscious learning. One advantage is that it allows the brain to build up an internal representation of the statistical structure of the world, whether it is the frequency of neighboring letters in words or the textures, forms and colors that make up images. Brains are also adept at adapting to sensorimotor interfaces. We first adapted to clunky keyboards, then to virtual pointers to virtual files, and now to texting with fingers and thumbs. As you become an expert at using it, the Internet, as with other tools, becomes an extension of your brain.
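The "frequency of neighboring letters in words" that brains absorb implicitly is easy to make explicit: it is just a bigram count. A minimal sketch; the sample sentence is arbitrary:

```python
from collections import Counter

# Count letter bigrams within words: the statistical structure that
# unconscious learning is said to absorb from ordinary reading.
text = "brains are highly adaptable and experiences have long term effects"

bigrams = Counter()
for word in text.lower().split():
    bigrams.update(zip(word, word[1:]))

print(bigrams.most_common(5))  # the most frequent neighboring-letter pairs
```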

Are the changes occurring in your brain as you interact with the Internet good or bad for you? Adapting to the touch and feel of the Internet makes it easier to extract information, but a better question is whether the changes in your brain will improve your fitness. There was a time, not long ago, when the heads of corporations did not use the Internet because they had never learned to type, but they are going extinct, replaced by more Internet-savvy managers.

Gaining knowledge and skills should benefit survival, but not if you spend all of your time immersed in the Internet. The intermittent rewards can become addictive, hijacking your dopamine neurons that predict future rewards. The Internet, however, has not been around long enough, and is changing too rapidly, to know what the long-term effects will be on brain function. What is the ultimate price for omniscience?

Victoria Stodden

Associate Professor of Information Sciences, University of Illinois at Urbana-Champaign

My title quotes Richard Feynman, and I am using his words to express how the Internet is providing not only information about our world but also the means to understand it in a deep sense. The increased use of the computer in scientific research, from simple data analysis to simulations, means the ability to recreate and verify facts for oneself is very real, since scientists can release on the Internet the complete software environment and data required to reproduce their results. The Internet is opening this possibility to society at large for the first time. If our home computing power or disk space is insufficient, the Internet connects us to massive computing power such as the TeraGrid or the cloud. We are poised to empower our own decision-making through Internet-based verification of what we believe, important for self-determination but also for the validity of the computational results themselves. The result is a change in how I expect to understand the world.

Data analysis has risen as an intellectual force of its own, with implications for how we accept new knowledge as facts. In 1962 John Tukey first proposed data analysis as a field in its own right, splitting the field of statistics in two. At that time, statistics was synonymous with mathematical analysis and the Information Age was only just beginning. Tukey foresaw the coming data deluge and recognized that the traditional machinery of mathematical statistics, such as hypothesis testing and confidence statements, had relatively little to offer for these new problems. There was an enormous amount of analysis to be done on vast amounts of data, and insisting on mathematics ran the risk of missing important findings. Now, data analysis is presenting challenging mathematical questions, and we are running that same risk in reverse.

When awash in data it is common to use the following three-step investigative method: a new phenomenon is found in the data, followed by an analysis strategy justified on heuristic grounds, and then some computational examples of apparent success are provided. This approach makes it nearly impossible to derive the deeper intellectual understanding that the mathematical framework is geared to uncover. Our basic tools of modern data analysis, from regression to principal components, were developed by scientists working squarely in the mathematical tradition, and are based on theorems and analysis. As the Internet facilitates a national hobby of data analysis, our thinking about scientific discovery is no longer typically in the intellectual tradition of mathematics. This tradition, and the area of my training, defines a meaningful investigation as involving a formal definition of the phenomenon of interest, stated carefully in a mathematical model, and use of a strategy for analysis that follows logically from the model. It is accompanied at every step by efforts to show how the opportunity for error and mistakes has been minimized. As data analysts we must have the same high standards for transparency in our findings, and consequently I am pushing my thinking toward deeper intellectual rigor, more in line with the mathematical tradition and less in line with the data analysis tradition so facilitated by the Internet.

Mathematics has been developing responses to the ubiquity of error for hundreds of years, resulting in formal logic and the mathematical proof. Computation is similarly highly error-prone, but recent enough to still be developing equivalent standards of openness and collective verification. An essential response is reproducibility of results: the release of the code and data that generated the computational findings we'd like to consider as a contribution to society's stock of knowledge. This subjects computational research to the same standards of openness that the proof provides in mathematics.
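In practice, releasing a computational finding means pinning down everything a second party needs to rerun it and compare outputs. A minimal sketch of such a release manifest, using only the standard library; the seed, the stand-in dataset, and the "finding" are all invented for illustration:

```python
import hashlib, json, platform, random, sys

# Fix the seed so the "analysis" is exactly repeatable.
SEED = 2010
random.seed(SEED)

data = [random.gauss(0, 1) for _ in range(1000)]  # stand-in dataset
result = sum(data) / len(data)                    # the computational "finding"

# Record the environment and fingerprint the inputs, so a reader can
# verify they reran the same computation on the same data.
manifest = {
    "python": sys.version.split()[0],
    "platform": platform.platform(),
    "seed": SEED,
    "data_sha256": hashlib.sha256(json.dumps(data).encode()).hexdigest(),
    "result": result,
}
print(json.dumps(manifest, indent=2))
```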

The Internet has changed how I think about science, and how to identify it. Today most computational results aren't accompanied by their underlying code and data, and my opening description of being able to recreate results for oneself is not commonplace. But I believe this will become typical: the draw of verifying what we know for ourselves, and of being less reliant on the conclusions of others, has remained evident in our long search for truth about our world. This seems a natural evolution from a state of knowledge derived from mystical sources with little ability to question and verify, through a science-facing society still with an epistemological gulf between scientist and non-scientist. Now the Internet allows more of our understanding to seep from the ivory tower, closing that gulf, empowering us to know things for ourselves, and changing our expectations about what it means to live in an open, data-driven society.

Frank Wilczek

Physicist, MIT; Recipient, 2004 Nobel Prize in Physics; Author, Fundamentals

(Apology: The question "How has the Internet changed the way you think?" is a difficult one for me to answer in an interesting way; the truth is, I use the Internet as an appliance, and it hasn't profoundly changed the way I think, at least not yet. So I've taken the liberty of interpreting the question more broadly, in the form "How should the Internet, or its descendants, affect how people like me think?")

If controversies were to arise, there would be no more need of disputation between two philosophers than between two accountants. For it would suffice to take their pencils in their hands, to sit down to the slates, and to say to each other (with a friend as witness, if they liked): "Let us calculate." — Leibniz (1685)

Clearly Leibniz was wrong here, for without disputation philosophers would cease to be philosophers. And it is difficult to see how any amount of calculation could settle, for example, the question of free will. But if, in Leibniz's visionary program, we substitute "sculptors of material reality" for "philosophers", then we arrive at an accurate description of an awesome opportunity — and an unanswered challenge — that faces us today. This opportunity began to take shape roughly eighty years ago, as the equations of quantum theory reached maturity.

The underlying physical laws necessary for the mathematical theory of a large part of physics and the whole of chemistry are thus completely known, and the difficulty is only that the exact application of these laws leads to equations much too complicated to be soluble. — P. A. M. Dirac (1929)

Much has happened in physics since Dirac's 1929 declaration. Physicists have found new equations that reach into the heart of atomic nuclei. High-energy accelerators have exposed new worlds of unexpected phenomena and tantalizing hints of Nature's ultimate beauty and symmetry. Thanks to that new fundamental understanding we understand how stars work, and how a profoundly simple but profoundly alien fireball evolved into the universe we inhabit today. Yet Dirac's bold claim holds up: while the new developments provide reliable equations for smaller objects and more extreme conditions than we could handle before, they haven't changed the rules of the game for ordinary matter under ordinary conditions. On the contrary, the triumphant march of quantum theory far beyond its original borders strengthens our faith in its soundness.

What even Dirac probably did not foresee, and what transforms his philosophical reflection of 1929 into a call to arms today, is that the limitation of being "much too complicated to be soluble" could be challenged. With today's chips and architectures, we can start to solve the equations for chemistry and materials science. By orchestrating the power of billions of tomorrow's chips, linked through the Internet or its successors, we should be able to construct virtual laboratories of unprecedented flexibility and power.

Instead of mining for rare ingredients, refining, cooking, and trying various combinations scattershot, we will explore for useful materials more easily and systematically, by feeding multitudes of possibilities, each defined by a few lines of code, into a world-spanning grid of linked computers.
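The computational pattern being described is, at bottom, a map over candidates followed by a reduction to the best few. A toy sketch in Python, under loud assumptions: the three-parameter "candidate" and its scoring function are stand-ins for a real quantum-chemistry calculation, and a single machine's process pool stands in for the world-spanning grid.

    # A toy sketch of the "world-grid" pattern: score many candidate
    # "materials" in parallel and keep the best. The scoring function is a
    # stand-in for a real quantum-chemistry or materials-science code.
    from concurrent.futures import ProcessPoolExecutor
    from itertools import product

    def score(candidate):
        """Stand-in for an expensive simulation; returns (quality, candidate)."""
        a, b, c = candidate
        quality = -(a - 1.3) ** 2 - (b - 0.7) ** 2 - (c - 2.1) ** 2  # toy landscape
        return quality, candidate

    if __name__ == "__main__":
        # Each candidate really is "a few lines of code": here, three parameters.
        grid = product([x / 10 for x in range(30)],
                       [y / 10 for y in range(30)],
                       [z / 10 for z in range(30)])
        # One machine's process pool; a planet-spanning grid distributes the
        # same map-and-reduce, since the tasks need no communication.
        with ProcessPoolExecutor() as pool:
            best = max(pool.map(score, grid, chunksize=1000))
        print("best candidate found:", best)

Because each candidate is evaluated independently, the search scales with the number of processors available, which is precisely what makes a grid of linked computers attractive for it.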

What might such a world-grid discover? Some not unrealistic possibilities: friendlier high-temperature superconductors that would enable lossless power transmission, levitated supertrains, and computers that aren't limited by the heat they generate; super-efficient photovoltaics and batteries that would enable cheap capture and flexible use of solar energy, and wean us off carbon burning; super-strong materials that could support elevators running directly from Earth to space.

The prospects we can presently foresee, exciting as they are, could be overmatched by discoveries not yet imagined. Beyond technological targets, we can aspire to a comprehensive survey of physical reality's potential. In 1964, Feynman posed this challenge:

Today, we cannot see whether Schrödinger's equation contains frogs, musical composers, or morality — or whether it does not. We cannot say whether something beyond it like God is needed, or not. And so we can all hold strong opinions either way. — R. P. Feynman (1964)

How far can we see today? Not all the way to frogs or to musical composers (at least not good ones), for sure. In fact only very recently did physicists succeed in solving the equations of quantum chromodynamics (QCD) to calculate a convincing proton, by using the fastest chips, big networks, and tricky algorithms. That might sound like a paltry beginning, but it's actually an encouraging show of strength, because the equations of QCD are much more complicated than the equations of quantum chemistry. And we've already been able to solve those more tractable equations well enough to guide several revolutions in the material foundations of microelectronics, laser technology, and magnetic imaging. But all these computational adventures, while impressive, are clearly warm-up exercises. To make a definitive leap into artificial reality, we'll need both more ingenuity and more computational power.

Fortunately, both could be at hand. The SETI@home project has enabled people around the world to donate their idle computer time to sift radio waves from space, advancing the search for extraterrestrial intelligence. In connection with the Large Hadron Collider (LHC) project, the CERN laboratory — where, earlier, the World Wide Web was born — is pioneering the GRID computing project, a sort of Internet on steroids, that will allow many thousands of remote computers and their users to share data and allocate tasks dynamically, functioning in essence as one giant brain. Only thus can we cope — barely! — with the gush of information that collisions at the LHC will generate. Projects like these are the shape of things to come.

Computers began to play chess by pure calculation in 1958, and rapidly became more capable, beating masters (1978), grandmasters (1988), and world champions (1997). In the later steps, a transition to "massively" parallel computers played a crucial role. Those special-purpose creations are mini-Internets (actually mini-GRIDs), networking dozens or a few hundred ordinary computers. It would be an instructive project, today, to set up a SETI@home-style network, or a GRID client, that could beat the best standalones. Players of this kind, once created, would scale up smoothly to overwhelming strength, simply by tapping into ever larger resources.

In the more difficult game of calculating quantum reality we, with the help of our silicon friends, presently play like weak masters. We know the rules, and make some good moves, but we often substitute guesswork for calculation, we miss inspired possibilities, and we take too long doing it. To do much better we'll need to make the dream of a world-GRID into a working reality. We'll need to find better ways of parceling out subtasks in ways that don't require intense communication, better ways of exploiting the locality of the underlying equations, and better ways of building in physical insight, to prune the solution space. These issues have not received the attention they deserve, in my opinion. Many people with the requisite training and talent feel it's worthier to discover new equations, however esoteric, than to solve equations we already have, however important their application.

People respond to the rush of competition and the joy of the hunt. Some well-designed prizes for milestone achievements in the simulation of matter could have a big impact, by focusing attention and a bit of glamour toward this tough but potentially glorious endeavor. How about, for example, a prize for calculating virtual water that boils at the right temperature?

robert_shapiro's picture

Professor Emeritus of Chemistry and Senior Research Scientist, New York University; Author, Planetary Dreams

The Internet has made it far easier for professionals to access and search the scientific literature. Unfortunately, it has also increased the chances that we will lose part or all of that literature.

When I was young, I imagined that everything I wrote would be preserved forever. Future biographers would seek out every letter, diary and memorandum to capture the essence of my creativity. My first laboratory notebook captured those emotions. On page one I had printed, very legibly, the following preface: "To Posterity: This volume contains the authentic record of ingenious and original chemical research conducted by Robert Shapiro, currently a graduate student of organic chemistry at Harvard University."

Reality gradually whittled down my grandiosity, and I recognized that my published papers had the best chance of survival. The New York University library carried bound journals that dated from the 19th century, and the articles they contained had obviously outlived their authors. As the number of my own published works grew, curiosity led me to select one of them and track its impact. I deliberately picked one of minor importance.

A generation ago, a persistent PhD student and I had failed in an effort to synthesize a new substance of theoretical interest. We had, however, prepared some other new compounds and improved some methods, so I wrote a paper that was published in 1969 in The Journal of Organic Chemistry. Had our results ever mattered to anyone? Using new computer-driven search tools, I could quickly check whether the paper had ever been noticed. To my surprise, I found that 11 papers and some patents had cited our publication, up to 2002. In one instance, our work provided a starting point for the preparation of new tranquilizers. I imagined that in the distant future, other workers might pull the appropriate volume off a library shelf and find my work to be of some help. I did not foresee that such bound volumes might no longer exist.

The Journal of Organic Chemistry started in 1936, and continues up to the present. Its demands on library shelf space have increased over time: the first volume contained only 583 pages, while the 2009 edition had 9680. The arrival of the Internet rescued libraries from the space crisis created by the proliferation of new journals and the vast increase in the size of existing ones. Many paper subscriptions were replaced by electronic ones, and past holdings were converted to digital form. It is not hard to imagine a future time when paper copies of the scientific literature will no longer exist. Many new journals are appearing only in digital form.

This conversion has produced many benefits for readers. In the past I had to leave my office, ride an elevator, walk several blocks, take another elevator, and make my way through a maze of shelves to find a paper that I needed. Occasionally, the issue I wanted was being used by someone else or had been misplaced, and I had traveled in vain. Now I can bring most papers that I want onto a computer screen in my office or at home in a matter of minutes. I can store the publication in my computer, or print out a copy if I wish. But with this gain in the accessibility of the literature of science has come an increase in its vulnerability.

Materials that exist in one or a few copies are inherently at greater risk than those that are widely distributed. A Picasso painting might be destroyed, but the Bible will survive. Alexander Stille, in The Future of the Past, reported that the works of Homer and Virgil survived from antiquity because their great popularity led them to be copied and recopied. On the other hand, only a handful of Sophocles' 120 plays have survived. Before the Internet came into play, I could take pride that each of my papers was present in hundreds or thousands of libraries across the globe. Its survival into the future was enhanced by the protection afforded by multiple copies. The same applies, of course, to the remainder of the scientific literature.

Thousands of paper copies of the literature have now been replaced by a few electronic records stored in computers. Furthermore, the storage medium is fragile. Some paper manuscripts have survived for centuries. The lifetimes of the various discs, drives and tapes currently used for digital storage are unknown, but are commonly estimated in decades. In some cases, works available only in electronic form have disappeared much more rapidly for another reason — lack of maintenance of the sites. One survey found that 12% of the Internet addresses cited in three prestigious medical and scientific journals were extinct two years after publication.
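Surveys of this kind are simple enough that any reader can repeat one. Below is a minimal sketch in Python, assuming a plain-text file of cited URLs, one per line; the file name and timeout are illustrative, and a careful survey would also follow redirects and retry transient failures before declaring a link dead.

    # linkrot.py -- count how many cited URLs still resolve. Assumes a
    # plain-text file, one URL per line; a thorough survey would also
    # follow redirects and retry transient failures.
    import urllib.request

    def alive(url, timeout=10):
        """True if the server answers a HEAD request for the URL."""
        try:
            req = urllib.request.Request(url, method="HEAD")
            with urllib.request.urlopen(req, timeout=timeout):
                return True
        except (OSError, ValueError):  # URLError and timeouts are OSErrors
            return False

    with open("cited_urls.txt") as f:
        urls = [line.strip() for line in f if line.strip()]

    dead = [u for u in urls if not alive(u)]
    print(f"{len(dead)} of {len(urls)} cited URLs no longer resolve")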

Such difficulties are unlikely to affect prestigious sources such as the Journal of Organic Chemistry. But material stored only on the Internet is far more vulnerable to destruction than the same material present in multiple paper copies. Electrical breakdown can disrupt access for a time, while cyberterrorism, civic disturbances, war and a variety of natural catastrophes could destroy part or all of the storage system, leading to the irretrievable loss of sections of the scientific literature. Anton Zeilinger wrote in a previous edition of this series that a nuclear explosion outside the earth's atmosphere would cause all computers, and ultimately society, to break down.

How has this changed my thinking? I no longer write with the expectation of immortality in print. I am much more tempted to contribute to Internet discussion forums, blogs, and media which may not persist. I seek my reward from the immediate response that my efforts may bring, with little thought to the possibility that some stranger may see my words centuries from now, and wonder about the life that was led by the person who wrote them.

nassim_nicholas_taleb's picture

Distinguished Professor of Risk Engineering, New York University School of Engineering ; Author, Incerto (Antifragile, The Black Swan...)

I used to think that the problem of information is that it turns homo sapiens into fools — we gain disproportionately in confidence, particularly in domains where information is wrapped in a high degree of noise (say, epidemiology, genetics, economics, etc.). So we end up thinking that we know more than we do, which, in economic life, causes foolish risk taking. When I started trading, I went on a news diet and I saw things with more clarity. I also saw how people built too many theories based on sterile news: the fooled-by-randomness effect. But things are a lot worse. Now I think that, in addition, the supply and spread of information turns the world into Extremistan (a world I describe as one in which random variables are dominated by extremes, with Black Swans playing a large role in them). The Internet, by spreading information, causes an increase in interdependence and the exacerbation of fads (bestsellers like Harry Potter and runs on banks become planetary). Such a world is more "complex", more moody, much less predictable.

So consider the explosive situation: more information (particularly thanks to the Internet) causes more confidence and illusions of knowledge while degrading predictability.

Look at the current economic crisis that started in 2008: there are about a million persons on the planet who identify themselves as working in the field of economics. Yet just a handful realized the possibility and depth of what could take place and protected themselves from the consequences. At no time in the history of mankind have we lived under so much ignorance (easily measured in terms of forecast errors) coupled with so much intellectual hubris. At no point have we had central bankers missing elementary risk metrics, like debt levels, that even the Babylonians understood well.

I recently talked to a scholar of rare wisdom and erudition, Jon Elster, who, in exploring themes from social science, integrates insights from authors across the corpus of the past 2,500 years, from Cicero and Seneca to Montaigne and Proust. He showed me how Seneca had a very sophisticated understanding of loss aversion. I felt guilty for the time I spent on the Internet. Upon getting home I found in my mail a volume of posthumous essays by Bishop Pierre-Daniel Huet called Huetiana, put together by his admirers c. 1722. It is so saddening to realize that, being born close to four centuries after Huet, and having done most of my reading with material written after his death, I am not much more advanced in wisdom than he was — moderns at the upper end are no wiser than their equivalents among the ancients; if anything, much less refined.

So I am now on an Internet diet, in order to understand the world a bit better — and make another bet on horrendous mistakes by economic policy makers. I am not entirely deprived of the Internet; this is just a severe diet, with strict rationing. True, technologies are the greatest things in the world, but they have way too monstrous side effects — and ones rarely seen ahead of time. And since spending time in the silence of my library, with little informational pollution, I can feel harmony with my genes; I feel I am growing again.

ian_wilmut's picture

Chair of Reproductive Biology, Director Scottish Centre for Regenerative Medicine, University of Edinburgh; Author, After Dolly

Use of the Internet has not changed the way that I think, but it is making a unique contribution by providing me with immediate and convenient access to an extraordinary range of ideas and information. This development can be considered a natural extension of the sequence that began with tablets of clay and continued through papyrus, parchment and handwritten manuscripts on paper to the recent mass-produced books printed on paper. Happily, the Internet provides us with access to many of these earlier forms of the written word as well as to electronic communications.

Access to information and ideas has always been important both for personal development and for the progress of a community or nation. As a schoolboy, when I first became interested in facts and ideas, my family were living in an industrial part of the north of England, and at that time I made great use of a public library. The library was part of an industrial village established by a philanthropic entrepreneur who made his money by importing alpacas' cashmere-like fleece and weaving fine clothes. Alpacas are members of the camelid family found in the Andes of Peru and Chile. The village, which is now a World Heritage Site, is Saltaire, named after the entrepreneur Sir Titus Salt. He provided not only houses and a hospital, but also schools, a technical college and the library. I took it for granted that libraries which provided access to books, most of which could be borrowed and taken home, were available everywhere. This is still not the case, but in the near future the Internet may provide an equivalent opportunity for people everywhere.

Whereas libraries have been established in most major societies, it is only in the recent past that they have been made generally available to ordinary citizens. One of the earliest libraries for which records remain is the Great Library of Alexandria in Egypt, founded around 300 BC by the pharaoh Ptolemy I. It grew to hold several hundred thousand scrolls, some of which are said to have been taken from boats that happened to dock at Alexandria while carrying out their trade.

The library contributed to the establishment of Alexandria as a major seat of learning. Sadly, the library was destroyed by fire. Nevertheless, it represented a particular landmark in the development of the concept of a library: a collection of books providing a reservoir of knowledge, staffed by dedicated keepers whose tasks included expansion of the collection. Other similar libraries were established during this period, including those at Ephesus in Turkey and Sankore in Timbuktu.

During the period of the Roman Empire wealthy and influential people continued the practice of establishing libraries, most of which were open only to scholars with the appropriate qualifications. A survey in AD 378 identified 29 libraries in Rome, but as the Empire declined the habit of establishing and maintaining libraries was lost. The development of monasteries provided a renewed stimulus for learning. They amassed book collections and introduced the habit of exchanging volumes. Recognizing the importance of learning, the Benedictine rule required that monks spend specified periods of time reading. As Europe emerged from the Dark Ages, wealthy families again began to collect books and then donate their libraries to seats of learning in places such as Florence, Paris, Vatican City and Oxford.

All of these libraries depended upon the copying of text by hand, and it was only with the development of printing by Gutenberg in the 1400s that the production of books was transformed and they became much more readily available. During the period from 1400 to 1800 there was an extraordinary expansion of libraries, by universities and nations. Some of these were named after major benefactors, such as the Bodleian Library in Oxford and the library donated by the Massachusetts clergyman John Harvard, after whom the university is named. In the United States the Library of Congress was founded in 1800, and after a fire during the War of 1812 its stock was replenished by the purchase of the collection that had been amassed by Thomas Jefferson. The Library of Congress now claims to be the largest library in the world, with more than 150 million items.

It was also during this period that public libraries became more common and books became more generally available for the first time. In some cases subscriptions were used to purchase books, but there was no charge for subsequent loans. One such was the Library Company of Philadelphia, established in 1731 by a group that included Benjamin Franklin.

The oldest surviving free reference library in the United Kingdom, Chetham's, was established in Manchester in 1653. Some 200 years later Karl Marx and Frederick Engels carried out research for Das Kapital in this library. It was at this time that the UK parliament passed an Act to promote the formation of public libraries. In the United States the first free public library was formed only in 1833, in New Hampshire. The Scots-born entrepreneur Andrew Carnegie went on to build more than 1,700 public libraries in the US between 1881 and 1919. These libraries were the first to make large numbers of books available to the general public.

Of course books are only valuable to those who have access to them, can read, and are encouraged to do so. Often reading was associated with religion, as knowledge of the sacred scripture was important. In England around 1200 the ability to read a particular Psalm entitled a defendant to be tried in an ecclesiastical court, which was typically more lenient than a civil court. In some places funds were allocated specifically to teach people to read the scriptures, but this provision was not always available universally. At the time of the Civil War in the US, owners were prohibited from teaching their slaves to read and write. As recently as 1964 the Brazilian educator Paulo Freire was arrested and exiled for daring to teach peasants to read.

Universal access to the Internet could make an exceptionally important contribution to future political developments. Such access would provide the opportunity for everyone, anywhere in the world, to obtain a great deal of information on any subject they choose. Knowledge accumulated over centuries of human experience is an important counter to the fashions of the moment communicated through commercial mass media. It is hard to imagine that making each of us aware of the circumstances and beliefs of people in other parts of the world can do anything but good. We would surely be more likely to assist countries such as Afghanistan and Iraq to form liberal democracies by helping to provide education, training, employment, and so wealth and greater understanding, than by military takeover, which inevitably causes very large numbers of civilian casualties and a great deal of damage.

There is one cautionary note. Texts of any kind, be they on parchment or available through electronic systems, are only as useful as they are accurate. In the days when books were prepared by hand the accuracy of scribes was recognized as being of paramount importance. In a rather different way, but of equal importance, we depend upon the rigor of the research done by those whose electronically reproduced articles we read.

michael_shermer's picture

Publisher, Skeptic magazine; Monthly Columnist, Scientific American; Presidential Fellow, Chapman University; Author, Heavens on Earth

In the 1980s I was a competitive bicycle racer, competing five times in the 3,000-mile nonstop transcontinental Race Across America, an event that Outside magazine called "the world's toughest sporting event." I felt that the playing field was level because in a pure sport such as cycling (this was before the days of sophisticated doping programs) it doesn't matter what your last name is, what schools you attended, how much money your parents have, which country clubs you belong to, your politics, religion, or socio-economic status, or any other social conventions. It only matters how fast you can pedal your bike. Full stop. Cycling is as close to a pure meritocracy as there is.

In my intellectual pursuits, however, I never felt that the playing field was level. In academia especially, but in other careers as well (most notably politics and corporate business), your name, money, connections, social standing, religion, and especially which institutions you are affiliated with do seem to matter…a lot. Pure skill and talent, while important, often seem to play second fiddle in the orchestral arrangement of society. The Internet is changing this.

Thanks to the Internet, for the first time in my life I feel that I have a chance to compete on a level playing field. My academic background is embarrassing compared to that of most successful intellectuals. My public high school education was so abysmal that I had to attend a community college in California for two years before matriculating at the (then) reputationless Pepperdine University. I scraped together a master's degree through the second-tier California State University system, and finally gave up hope for an intellectual life and raced bikes for a decade. By the time I earned a Ph.D. from the distinctly non-elitist Claremont Graduate University, I discovered there were next to no jobs, especially for someone with an intellectual pedigree such as mine. Since teaching as an adjunct professor is no way to make a living (literally), I founded the Skeptics Society and Skeptic magazine just as the Internet was getting legs in the early 1990s.

Starting with no money, no backers, and no affiliation with elite institutions, the Internet made it possible for us to succeed by making knowledge accessible and searchable to me and my editors and writers on a scale never previously available. The intellectual playing field was being leveled and the Internet changed the way I think about the very real possibility of fairness and opportunity in a world that has for too long been rigged to favor the elite.

Who needs brick-and-mortar libraries when knowledge is available at our fingertips? Who needs acceptance into elite universities when the same knowledge is searchable by anyone from anywhere? Who needs access to exclusive clubs when knowledge is no longer the province of just the privileged? We're not all the way there yet, but the Internet is leveling the knowledge playing field by democratizing access to information.

This is real power, and I feel that power as never before.

timothy_taylor's picture

Jan Eisner Professor of Archaeology, Comenius University in Bratislava; Author, The Artificial Ape

The first bit is wholly unsurprising: the Internet was designed for people like me, by people like me, most of them English speakers. Fundamentally reflecting western, rationalist, objective, data-organizing drives, the Internet simply enhances my ability to think in familiar ways, letting me work longer, more often, with better focus, free from the social tyranny of the library and the uncertainty of postmen. The Internet has changed what I think, however — most notably about where the human race is now headed. From a prehistorian's perspective, I judge that we have been returned to a point last occupied at the time of our evolutionary origin. This is what I mean:

When the first stone tool was chipped, over two million years ago, it signalled a new way of being. The ancestral community learned to make flint axes, and those first artificial objects, in turn, critically framed a shared, reflective consciousness that began to express itself in language. An axe could be both made and said, used and asked for. The invention of technology brought the earliest unitary template for human thought into being. It can even be argued that it essentially created us as characteristically human.

What happened next is well known: technology accelerated adaptation. The original ancestral human culture spread out across continents and morphed into cultures, plural — myriad ways of being. While isolated groups drifted into ever greater idiosyncrasy, those who found themselves in competition for the same resources consciously strove to differentiate themselves from their neighbours. This ever-deepening cultural specificity facilitated the dehumanization of enemies that successful warfare, driven by jealously guarded technological innovation, required.

Then reunification began, starting five thousand years ago, with the development of writing — a technology that allowed the transcription of difference. War was not over, but alien thoughts did begin to be translated, at first very approximately, across the boundaries of local incomprehension. The mature Internet marks the completion of this process, and thus the reemergence of a fully contiguous human cultural landscape. We now have the same capacity for being united under a common language and shared technology that our earliest human ancestors had.

So, in a crucial sense, we are back at the beginning, returned into the presence of a shared template for human thought. From now on, there are vanishingly few excuses for remaining ignorant of objective scientific facts, and ever thinner grounds for cultivating hatred through willful failure to recognize our shared humanity. Respecting difference has its limits, however: the fact of our knowing that there is a humanity to share means we must increasingly work towards agreeing common moral standards. The Internet means that there is nowhere to hide and no way to shirk responsibility when the whole tribe makes informed decisions (as it now must) about its shared future.

clay_shirky's picture

Social & Technology Network Topology Researcher; Adjunct Professor, NYU Graduate School of Interactive Telecommunications Program (ITP); Author, Cognitive Surplus

The Internet has been in majority use in the developed world for less than a decade, but we can already see some characteristic advantages (dramatically improved access to information, very large scale collaborations) and disadvantages (interrupt-driven thought, endless distractions). It's tempting to try to adjudicate the network's relative value for the way we think by deciding whether access to Wikipedia outweighs access to tentacle porn or the other way around.

Unfortunately for us, though, the intellectual fate of our historical generation is unlikely to matter much in the long haul. It is our misfortune to live through the largest increase in expressive capability in the history of the human race, a misfortune because surplus always breaks more things than scarcity. Scarcity means valuable things become more valuable, a conceptually easy change to integrate. Surplus, on the other hand, means previously valuable things stop being valuable, which freaks people out.

To make a historical analogy with the last major increase in the written word, you could earn a living in 1500 simply by knowing how to read and write. The spread of those abilities in the subsequent century had the curious property of making literacy both more essential and less professional; literacy became critical at the same time as the scribes lost their jobs.

The same thing is happening with publishing; in the 20th century, the mere fact of owning the apparatus to make something public, whether a printing press or a TV tower, made you a person of considerable importance. Today, though, publishing, in its sense of making things public, is becoming similarly de-professionalized; YouTube is now in the position of having to stop 8-year-olds from becoming global publishers of video. The mere fact of being able to publish to a global audience is the new literacy: formerly valuable, now so widely available that you can't make any money with the basic capability any more.

This shock of inclusion, where professional media gives way to participation by two billion amateurs (a threshold we will cross this year), means that the average quality of public thought has collapsed; when anyone can say anything any time, how could it not? If the destruction of existing models for producing high-quality material were all that came of this influx of amateurs, we would be at the beginning of another Dark Ages.

So it falls to us to make sure that isn't all that happens.

To the question "How is Internet is changing the way we think?", the right answer is "Too soon to tell." This isn't because we can't see some of the obvious effects already, but because the deep changes will be manifested only when new cultural norms shape what the technology makes possible.

To return to the press analogy, printing was a necessary but not sufficient input to the scientific revolution. The Invisible College, the group of natural philosophers who drove the original revolution in chemistry in the mid-1600s, were strongly critical of the alchemists, their intellectual forebears, who for centuries had made only fitful progress. By contrast, the Invisible College put chemistry on a sound scientific footing in a matter of a couple of decades, one of the most important intellectual transitions in the history of science. In the 1600s, though, a chemist and an alchemist used the same tools and had access to the same background. What did the Invisible College have that the alchemists didn't?

They had a culture of sharing. The problem with the alchemists wasn't that they failed to turn lead into gold; the problem was that they failed uninformatively. Alchemists were obscurantists, recording their work by hand and rarely showing it to anyone but disciples. In contrast, members of the Invisible College shared their work, describing and disputing their methods and conclusions so that they all might benefit from both successes and failures, and build on each other's work.

The chemists were, to use Richard Foreman's phrase, "pancake people". They abandoned the spiritual depths of alchemy for a continual and continually incomplete grappling with what was real, a task so daunting that no one person could take it on alone. Though the history of science we learn as schoolchildren is often marked by the trope of the lone genius, science has always been a networked operation.

In this we can see a precursor to what's possible for us today. Just as the Invisible College didn't just use the printing press as raw capability, but created a culture that used the press to support the transparency and argumentation science relies on, we have the same opportunity.

As we know from arXiv.org, the 20th century model of publishing is inadequate to the kind of sharing possible today. As we know from Wikipedia, post-hoc peer review can support astonishing creations of shared value. As we know from the search for Mersenne Primes, whole branches of mathematical exploration are now best taken on by groups. As we know from Open Source efforts like Linux, collaboration between loosely joined parties can work at scales and over timeframes previously unimagined. As we know from NASA clickworkers, groups of amateurs can sometimes replace single experts. As we know from Patients Like Me, patient involvement accelerates medical research. And so on.

The beneficiaries of the system where making things public was a privileged activity, whether academics or politicians, reporters or doctors, will complain about the way the new abundance of public thought upends the old order, but those complaints are like keening at a wake; the change they fear is already in the past. The real action is elsewhere.

The Internet's primary effect on how we think will only reveal itself when it affects the cultural milieu of thought, not just the behavior of individual users. The members of the Invisible College did not live to see the full flowering of the scientific method, and we will not live to see what use humanity makes of a medium for sharing that is cheap, instant, and global (both in the sense of 'comes from everyone' and 'goes everywhere.') We are, however, the people who are setting the earliest patterns for this medium. Our fate won't matter much, but the norms we set will.

Given what we have today, the Internet could easily become Invisible High School, with a modicum of educational material in an ocean of narcissism and social obsessions. We could, however, also use it as an Invisible College, the communicative backbone of real intellectual and civic change, but to do this will require more than technology. It will require that we adopt norms of open sharing and participation, fit to a world where publishing has become the new literacy.

max_tegmark's picture

Physicist, MIT; Researcher, Precision Cosmology; Scientific Director, Foundational Questions Institute; President, Future of Life Institute; Author, Life 3.0

I have a love-hate relationship with the Internet. With procrastination just a click away, and a seductive Siren song in the form of new-mail pings, I find it challenging to stay focused on a single subject long enough to have real impact. Maintaining the Zen-like focus that is so crucial for doing science was easier back when the newspaper and the mail came only once per day. Indeed, as a part of an abstinence-based rehab program, I now try to disconnect completely from the Internet while thinking, closing my mail program and Web browser for hours,  much to the chagrin of colleagues and friends who expect instant response. To get fresh and original ideas, I typically need to go even further, and completely turn off my computer.

On the other hand, the Internet gives me more time for such Internet-free thinking by eliminating second millennium style visits to libraries and stores. The Internet also lets me focus my thinking on the research frontier rather than on reinventing the wheel. Had the Internet existed in 1922 when Alexander Friedmann discovered the expanding universe model, Georges Lemaître wouldn't have had to rediscover it five years later.

The Internet gives me not only traditionally available information faster (and sometimes faster than I can retrieve it from memory), but also previously unavailable information. With some notable exceptions, I find that "the truth, nothing but the truth, but maybe not the whole truth" provides a useful rule of thumb for news reporting, and I usually find it both easy and amusing to piece together what actually happened by pretending that I just arrived from Mars, and comparing a spectrum of Web sites from Fox News to Al Jazeera.

The Internet also affects my thinking by leaving me thinking about the Internet. What will it do to us? On the flip side, as the master of distraction, it seems to be further reducing our collective attention span from the depths to which television had brought it. Important issues fade from focus fast, and while many of humanity's challenges get more complicated, society's ability to pay attention to complex arguments dwindles. Sound bites and attack ads work well when the world has attention deficit disorder.

On the other hand, the ubiquity of information is clearly having positive impact in areas ranging from science and education to economic development. I think the essence of science is to think for oneself and question authority. I therefore delight in the fact that the Internet makes it harder to restrict information and block the truth. Once the cat is out of the bag and in the cloud, that's it. Today it's hard even for Iran and China to prevent information dissemination. Soviet-style restrictions on copying machines sound quaint today, and the only currently reliable censorship is not to allow the Internet at all, like in North Korea.

Love it or hate it, free information will transform the world. Oft-discussed examples range from third-world education to terrorist technology. As another example, suppose someone discovers and posts online a safe low-tech chemical process for mass-producing all-synthetic cocaine, THC or heroin from cheap and readily available chemicals, much like methamphetamine manufacturing today except safer and cheaper. This would trigger domestic drug production in industrialized countries that no government could stop, in turn slashing prices and potentially devastating both the revenue and the power of Colombian and Mexican drug cartels as well as the Taliban.

barry_c_smith's picture

Professor & Director, Institute of Philosophy School of Advanced Study University of London

The growth of the Internet has reversed previous assumptions: the private is now public; the local appears globally; information is entertainment; consumers turn into producers; everyone is an expert; and the socially isolated become part of an enormous community preferring the virtual to the real. What have all these changes brought about?

Initially, they appear empowering. Everyone can have their say, opinion is democratic; and at a time when natural resources are shrinking, and where environmental threats require us to limit our emissions, the Internet seems to be an ever expanding and almost limitless resource. Here, it seems, I discover a parallel world where neat models replace messy reality, where freedom reigns, where wrongs are righted, and where fates can be changed. I am cheered by the possibilities.

However, the truth is that the virtual world grows out of, and ultimately depends on, the one world whose inputs it draws on, whose resources it consumes, and whose flaws it inevitably inherits. I find everything there: the good, the bland, the important, the trivial, the fascinating and the off-putting. And just as there are crusading writers, and eye-witness reporters, there are also cyber lynch mobs, hate mailers and stalkers. As more of my information appears on the Net, more use is made of it, for good or for ill. Increasing Internet identity means increasing identity theft, and whatever I have encrypted, hackers will try to decode. So much so that governments and other organisations often restrict their most secure communications to older technologies, even sending scrolled messages in small capsules through pneumatic pipes. This, of course, fuels the suspicions of Internet conspiracy theorists.

Looking at what I've gained, I now hear from a greater range of different voices and discover new talents with something to say: niche writers, collectors, musicians and artists. I have access to more books, journal articles, newspapers, TV programs, documentaries and films. Missed something live? It will be on the Web. The greatest proportion of these individuals and outputs were already offering something interesting or important to which the Internet gave worldwide access. Here we have ready-made content for the voracious Internet to consume and display.

But new media have emerged, too, whose content arose for, or on, the Internet: these include blogging, Wikipedia and YouTube, along with new forms of shared communication, such as Facebook, Google Groups and Twitter. Will these new forms replace the ready-made contents? It's unclear. Amid the bread-and-circuses element of the Internet there is a need for good-quality materials, and a means to sort the wheat from the chaff: garbage in, garbage out, as computer programmers say. It is our choice, some will say, and yet I find myself looking with sheer disbelief or ironic amusement at what people have chosen to put up on the Net. The greatest fascination is bloggers who rather knowingly provide alternative slices of life. Here we have diarists who desire to be intimate with everyone. Those with a distinctive voice and a good theme have found a following, and when word spreads worldwide, the result is usually a contract to publish their output, lightly edited, as a book, which in turn can be read on the Internet.

What of the new Web-dependent phenomena: open access and open-source programming, virtual social networking, the co-construction of knowledge? All these are gains and reflect something hopeful: the collaborative effort of our joint endeavour; our willingness to share. The inclusive natures of these phenomena are encouraging. I want to join in, and I like the idea of making a modest contribution to a larger enterprise. But the new technologies also let me witness their distancing and distorting influences: Internet-fuelled fantasies where everyone can be a celebrity, or can live through their avatar in virtual reality, or develop alternative personalities in chat rooms — fantasies that someone, somewhere on the Internet is making money from.

How do I cope with the speeded-up information age? The overload is overwhelming, but so is my desire to know and not to miss anything. I'm tempted to know a little bit about everything and look for pre-digested, concise, neatly formatted content from reliable sources. My reading habits have changed, making me aware of how important well-packaged information has become. It has become necessary to consume thousands of abstracts from scientific journals, doing one's own fast search for what should be read in more detail. Debates seem to be decided at the level of abstracts, repudiations signalled by a title and a hundred words. The real work, of course, goes on elsewhere, but we want the Internet to bring us the results. This leaves me knowing less about more and more. At the same time I am exhilarated by the dizzying effort to make connections and integrate information. Learning is faster, though the tendency to forge connecting themes can feel dangerously close to the search for patterns that overtakes the mentally ill. Time to slow down and engage in longer study.

The Internet shows me more and more about those who participate in it, but I worry lest I forget that not everything or everyone in the world has a home on the Internet. Missing are those who cannot read or write, who have no access to a computer, or who choose to remain disconnected. There is a danger of coming to think that what cannot be found in an Internet search doesn't exist, and that the virtual world is the world. It isn't. However bizarre and incredible the people populating the Internet are, they are still akin to me: people with knowledge of computers and their applications. Certainly there is diversity and hierarchy, and vast domains of varied information; but except when Internet users turn their attention to those who are excluded, or who exclude themselves, the Net holds a mirror up only to those who sustain the information age, and it is only this part of the world about which I come to have even scattered information.

frank_tipler's picture

Professor of Mathematical Physics, Tulane University; Coauthor (with John Barrow), The Anthropic Cosmological Principle

The Internet first appeared long after I had received my Ph.D. in physics, and I was slow to use it. I had been trained in physical library search techniques: look up the subject in Science Abstracts (a journal itself now made defunct by the Internet), then go to the archived full article in the physical journal shelved nearby. Now I simply search the topics in the Science Citation Index (SCI), and then go to the journal article available online. I no longer have to go to the library; I can access the SCI and the online journals via the Internet.

These Internet versions of journals and Abstracts have one disadvantage at present: my university can afford only a limited window for the search. I can use the SCI only back ten years, and most e-journals have not yet converted their older volumes to online format; or, if they have, my university often cannot afford to pay for access to those older volumes.

So the Internet causes scientific knowledge to become obsolete faster than was the case with the older print media. A scientist trained in the print media tradition is aware that there is knowledge stored in the print journals, but I wonder if the new generation of scientists, who grow up with the Internet, are aware of this. Also, print journals were forever. They may have merely gathered dust for decades, but they could still be read by any later generation. I can no longer read my own articles stored on the floppy discs of the 1980s. Computer technology has changed too much. Will information stored on the Internet become unreadable to later generations because of data storage changes, and the knowledge lost?

At the moment the data is accessible. More importantly, the raw experimental data is becoming available to theorists like myself via the Internet. It is well known from the history of science that experimentalists quite often do not appreciate the full significance of their own observations. "A new phenomenon is first seen by someone who did not discover it," is one way of expressing this fact. Now that the Internet allows the experimenter to post her data, we theorists can individually analyze it.

Let me give an example from my own work. Standard quantum mechanics asserts that an interference pattern of electrons passing through a double slit must have a certain distribution as the number of electrons approaches infinity. However, this same standard quantum mechanics does not give an exact description of the rate at which the final distribution will be approached. Many-Worlds quantum mechanics, in contrast, gives us a precise formula for this rate of approach, since according to Many-Worlds quantum mechanics, physical reality is not probabilistic at all, but more deterministic than the universe of classical mechanics. (According to Many-Worlds quantum mechanics, the wave function measures the density of Worlds in the Multiverse rather than a probability.)

Experimenters — indeed, undergraduate students in physics — have observed the approach to the final distribution, but they have never tried to compare their observations with any rate of approach formula, since according to standard quantum mechanics there is no rate of approach formula. Using the Internet, I was able to find raw data on electron interference that I used to test the Many-Worlds formula. Most theorists can tell a similar story.
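The flavor of such a test can be conveyed in simulation without reproducing the Many-Worlds rate formula itself, which is not given here. The sketch below, a toy illustration under stated assumptions, draws simulated electron hits from an idealized two-slit intensity pattern and tracks how the empirical distribution approaches its limit as the number of hits grows; the particular pattern and distance measure are illustrative choices, not part of the physics being claimed.

    # A sketch of the kind of test described above: sample simulated electron
    # hits from an idealized two-slit intensity |psi|^2, then measure how fast
    # the empirical distribution approaches the limit as counts grow. The
    # specific rate formula under test is not reproduced here; this only
    # illustrates extracting an approach rate from raw hit data.
    import math
    import random
    from itertools import accumulate

    def intensity(x):
        """Idealized two-slit pattern: fringes under a diffraction envelope."""
        envelope = (math.sin(x) / x) ** 2 if x != 0 else 1.0
        return envelope * math.cos(3.0 * x) ** 2

    # Tabulate the limiting distribution on a discretized detection screen.
    xs = [i * 0.01 - 10.0 for i in range(2001)]
    weights = [intensity(x) for x in xs]
    total = sum(weights)
    limit = [w / total for w in weights]
    cum = list(accumulate(weights))

    random.seed(0)
    counts = [0] * len(xs)
    drawn = 0
    for n in (100, 1_000, 10_000, 100_000):
        # Draw additional simulated electron hits up to a total of n.
        for i in random.choices(range(len(xs)), cum_weights=cum, k=n - drawn):
            counts[i] += 1
        drawn = n
        # Total-variation distance between empirical and limiting distributions.
        tv = 0.5 * sum(abs(c / n - p) for c, p in zip(counts, limit))
        print(f"N = {n:>6}: distance to limit = {tv:.4f}")

Any proposed rate-of-approach formula could then be compared against the printed sequence of distances, which is exactly the sort of comparison raw posted data makes possible.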

But I sometimes wonder if later generations of theorists will be able to tell a similar story. Discoveries can be made by analyzing raw data posted online today, but will this always be true? The great physicist Richard Feynman often claimed: "There will be no more great physicists." Feynman believed that great physicists were those scientists who looked at reality from a different point of view than other scientists. He argued in Surely You're Joking, Mr. Feynman! that all of his own achievements were due, not to his higher-than-other-physicists I.Q., but to his having a "different bag of tricks." Feynman thought that future generations of physicists would all have the same bag of tricks, and consequently be unable to move beyond the consensus view. Everyone would think the same way.

The Internet is currently the great leveler: it allows everyone to have access to exactly the same information. Will this ultimately destroy diversity of thought? Or will the tendency of people to form isolated groups on the Internet preserve that all-important diversity of thought, so that although scientists all have equal access in principle, there are still those who look at the raw data in a different way from the consensus?

laurence_c_smith's picture

Professor of Environmental Studies, Brown University; Author, Rivers of Power

I remember very well the day when the Internet began changing the way I think. It happened in the spring of 1993 in a drab, windowless computer lab at Cornell. One of my fellow graduate students (a former Microsoft programmer who liked to stay abreast of things) had drawn a crowd around his flickering UNIX box. I shouldered my way in, then became transfixed as his fingers flew over Xmosaic, the first widely available Web browser in the world.

Xmosaic was only months old. It had been written at the University of Illinois' National Center for Supercomputing Applications by an undergraduate student named Marc Andreessen (a year later he would launch Netscape, its multi-billion-dollar successor) and Eric Bina. Already there were some Web sites up and running. Urged on by his crowd's word-search suggestions ("Sex!" "Kurt Cobain!" "Landsat!"), my fellow student lifted the curtain on a new world of commerce, entertainment and scientific exchange in barely fifteen minutes. A sense that something important was happening filled the lab. By the next day everyone had Xmosaic up and running.

How has my thinking changed since that day in 1993? Like most everyone, I've become both more addicted to information and more informed. With so much knowledge poised instantly beneath my fingertips, I am far less tolerant of my own ignorance. If I don't know something, I look it up. Today I flit through dozens of newspapers a day, where before I barely read one. Too many hours of my life are consumed in this way, and other tasks put off, but I am perpetually educated in return.

I am now more economics-minded than before. In 1992 if I had to fly someplace I called the travel agent who worked around the corner and accepted whatever she said was a good fare. Today, I thrash competing search engines to shake the last nickel out of a plane ticket. Before shopping online I hunt and peck for secret discount codes. This superabundance of explicit pricing information has instilled in me an obsessive thriftiness that I did not possess before. Doubtless it has helped contribute to thousands of excellent travel agents losing their jobs, and even more hours of time wasted, in return for these perceived monetary savings.

The pace and scale of my branch of science have become turbocharged. Unlike before when scientific data were hard to get, expensive, and prized, my students and I now freely post or download enormous volumes at little or no cost. We ingest streaming torrents of satellite images and climate model simulations in near-real time; we post our own online for free use by unseen others around the planet. In a data-rich world, a new aesthetic of sharing, transparency, and collaboration has emerged to supplant the old one of data-hoarding and secretiveness. Earth science has become an extraordinarily exciting, vibrant and fast-advancing field because of this.

Perhaps the most profound change in my thinking is how the new ease of information access has allowed me to synthesize broad new ideas drawing from fields of scholarship outside my own. It took less than two years for me to finish a book identifying important convergent trends not only in climate science (my formal area of expertise) but globalization, population demographics, energy, political science, geography and law. While a synthesis of such scope might well have been possible without the light-speed world library of the Internet, I, for one, would never have attempted it.

Before 1993 my thinking was complacent, spendthrift, and narrow. Now, it is informed, tightfisted, and synthetic. I can't wait to see where it goes next.

fred_tomaselli's picture

The Internet has not so much changed my thinking as it has expanded my preexisting artistic sensibility. Like many collagists, I cobble together quilts of disparate information that rely on uncanny juxtapositions to create new meaning. Cut and paste has always been the way I think. I used to spend days in bookstores and libraries searching for raw images and information to be reorganized and repurposed into my pictures.

Now I sit in front of my computer and grab them out of the Internet hive mind that expands endlessly outwards, a giant, evolving global collage that participants edit to conform to their needs and sensibilities. This process of hunting and capturing reduces me to a pair of hungry eyes and two thinking hands. (My whole body is for later, for when I build my pictures analog-style.) When the image is finally assembled, it sings in the chorus of a million authors. I am the conductor and through me, this collective hums. The electricity overwhelms me. I'm no longer a rugged individualist.

There was a time, not that long ago, when the apostles of the coming digital age predicted the obsolescence of unique art objects. They forgot that some once believed that the emergence of photography would render paintings useless. As we now know, the emergence of photography actually helped free artists from the need to describe the world realistically, and this helped revivify painting and jumpstart modernism. From then on, artists could do anything they wanted, and they did. Photography caused all hell to break loose, and that hell and some new ones are now fighting it out in an info-cloud.

Now I can do more than I ever thought I wanted. The Internet has given me a new paintbrush that I can use towards the making of singular things. In this landscape of endless copies, a real thing, made by a person, with its repository of the creator's time and its tactility, scale and surface quality, is almost startling in its strangeness.

Growing up in the land of theme parks, I became aware at an early age that the unreal is the realest thing there is. Waterfalls without pumps and electricity? Impossible! A sublime without LSD? Who are you kidding? Experiencing all this made me want to make real things about my unreal world. Now I can capture banal elements of the shimmering digital mirage and fix them into place where they can become strange again.

Oh real, tangible things, is my love for you proof of my own obsolescence? I'm filled with nostalgia for the dying objects of the old economy. Over the years, I would occasionally draw on top of handmade, unique photograms. Now, the kind of photo paper that can withstand my scribbling has become extinct. I've also sporadically used the front page of the New York Times as a backdrop for collage and paint interventions. How long will it be before it too is no longer available? (Still, vinyl refuses to die. Maybe there is hope.) I used to be jealous of cultural forms that existed through an economy of copies. Books, newspapers, magazines, films and recordings offered a democratic way for consumers to pony up a tiny chunk of money that helped the author or enterprise survive and sometimes even prosper.

Now copies are worth even less than the paper they're not printed on. Despite the new economy, unique art objects seem to have maintained a semblance of monetary value. (For the time being at least.) While a few patrons have always supported a few artists, most art is still not worth much. In the future, I expect that we'll all be poor, but for the time being, value is now given to living humans doing real things, or real things made by living humans. (Well, all living humans except for poets. No one said the Internet was fair.)

I'm an information grazer. I've always felt comfortable skidding across vast plains of data, connecting the dots wherever it feels right. The Internet mirrors the cross-connectivity of my own mind — a mind, it should be noted, that has been hybridized by drugs and other consciousness-altering activities. Aldous Huxley famously posited that to enable us to live, the brain and nervous system eliminate unessential information from the totality of our minds. Psychedelics, on the other hand, overwhelm our minds with the fullness of the world. In other words, information overload is just another way of being psychedelic. I can live with this. All good art experiences are inherently psychoactive. Art modifies perception and offers either a window or a mirror. Sometimes, if we're lucky, it does both at the same time.

Huxley tells us that our minds are constantly editing down the world into manageable bits. The problem with the Internet is that the menu has gotten too big, too unwieldy and too full of lies and stupidity. Who can apprehend or trust it? For instance, if I search for "naked lady" I come up with 16,400,000 items in 0.18 seconds. Somewhere lies the perfect naked lady, but where is she? I get cranky and impatient. I know she's there somewhere and I want her now. I've become habituated to getting everything right away. I'm the editor who thinks he's in control, but my fingers on a keyboard have a tough time finding a few trees in this haystack of needles. Wherever I settle, I always suspect a better choice is just around the corner.

lee_smolin's picture

Physicist, Perimeter Institute; Author, Einstein's Unfinished Revolution

The Internet hasn't, so far, changed how we think. But it has radically altered the contexts in which we think and work.

The Internet offers a vast realm for distraction, but then so does reading and television. The Internet is an improvement on television in the same way that Jane Jacobs's bustling neighborhood sidewalk is an improvement on the dullness of suburbia. The Internet requires an active engagement, and as a result it is full of surprises. You don't watch the Internet; you search and link. What is important for thought about the Internet is not the content, it is the new activity of being a searcher, with the world's store of knowledge and images at your fingertips.

The miracle of the browser is that it can show you any image or text from that storehouse. We used to cultivate thought; now we have become hunter-gatherers of images and information. This speeds things up a lot, but it doesn't replace the hard work in the laboratory or notebook that prepares the mind for a flash of insight. It nonetheless changes the social situation of that mind. Scholars used to be more tied to the past through texts in libraries than to their contemporaries. The Internet reverses that by making each of our minds a node in a continually evolving network of other minds.

The Internet is also itself a metaphor for the emerging paradigm of thought in which systems are conceived as networks of relationships. To the extent that a Web page can be defined only by what links to it and what it links to, it is analogous to one of Leibniz's monads. But Web pages still have content, and so are not purely relational. Imagine a virtual world abstracted from the Internet by deleting all the content, so that all that remained was the links. This is an image of the universe according to relational theories of space and time; it is also an image of the neural network in the brain. The content corresponds to what is missing in those models; it corresponds to what physicists and computer scientists have yet to understand about the difference between a mathematical model and an animated world or conscious mind.
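
Smolin's thought experiment is easy to make concrete. Here is a minimal sketch in Python (the pages, content, and links are invented for illustration) of stripping a tiny web down to its bare relational structure:

    # A toy "web" of three pages; names, content, and links are invented.
    web = {
        "home":    {"content": "Welcome!",         "links": ["physics", "art"]},
        "physics": {"content": "Notes on monads.", "links": ["home"]},
        "art":     {"content": "A gallery.",       "links": ["home", "physics"]},
    }

    # Delete all the content so that only the links remain: each node is now
    # defined purely relationally, by what it links to and what links to it.
    link_graph = {page: set(data["links"]) for page, data in web.items()}

    for page, outgoing in sorted(link_graph.items()):
        incoming = {p for p, links in link_graph.items() if page in links}
        print(page, "links to", sorted(outgoing), "| linked from", sorted(incoming))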

Perhaps when the Internet has been soldered into our glasses or teeth, with the screen replaced by a laser making images directly on our retinas, there will be deeper changes. But even in its present form the Internet has transformed how we scientists work.

The Internet flattens communities of thought. Blogs, email and Internet databases put everyone in the community on the same footing. There is a premium on articulateness. You don't need a secretary to maintain a large and varied correspondence.

Since 1992, research papers in physics have been posted to an Internet archive, arxiv.org, which offers a daily distribution of just-posted papers and complete search and cross-reference capabilities. It is moderated rather than refereed, and the refereed journals now play no role in spreading information. This gives a feeling of engagement and responsibility: once you are a registered member of the community, you don't have to ask anyone's permission to publish your scientific results.

The Internet delocalizes your community. You participate from wherever you are. You don't need to travel to see or give talks, and there is less reason to go into the office. Travel is no longer an excuse for falling behind on the latest papers and blog postings.

It used to be that physics preprints were distributed by bulk mail among major research institutes, so there was a big advantage to being at a major university in the United States; everyone else worked with the handicap of being weeks to months behind. The increasing numbers and influence of scientists working in Asia and Latin America, and the dominance of European science in some fields, are a consequence of the Internet.

The Internet synchronizes the thinking of global scientific communities. Everyone gets the news about the new papers at the same time every day. Gossip spreads just as fast on blogs. Announcements of new experimental results are video-cast through the Internet as they happen.

The Internet also broadens communities of thought. Obscure thinkers that you had to be introduced to, who published highly original work sporadically and in hard-to-find places, now have Web pages and post their papers alongside everyone else's. And it creates communities of diverse thinkers who would not otherwise have met, like the one we celebrate every year at this time when we answer the Edge Annual Question.

john_tooby's picture

Founder of the field of Evolutionary Psychology; Co-director, Center for Evolutionary Psychology; Professor of Anthropology, UC Santa Barbara

Obliterating whole lineages — diatoms and dinosaurs, corals and crustaceans, ammonites and amphibians — shockwaves from the Yucatán impact 65 million years ago ripped through the intricate interdependencies of the planetary ecosystem, turning blankets of life into shrouds in one incandescent geological instant. Knocking out keystone species and toppling community structures, these shifts and extinctions opened up new opportunities, inviting avian and mammalian adaptive radiations and other bursts of innovation that transformed the living world — and eventually opening the way for our placenta-suckled, unprecedentedly luxuriant brains.

What with one thing and another, now here we are: The Internet and the World Wide Web that runs on it have struck our species' informational ecology with a similarly explosive impact, their shockwaves rippling through our cultural, social, economic, political, technological, scientific, and even cognitive landscapes.

To understand the nature and magnitude of what is to come, consider the effects of Gutenberg's ingenious marriage of the grape press, oil-based inks, and his method for inexpensively producing movable type. Before Gutenberg, books were scarce and expensive, requiring months or years of skilled individual effort to produce a single copy. Inevitably, they were primarily prestige goods for aristocrats and clerics, their content devoted to the narrow and largely useless status or ritual preoccupations of their owners. Slow-changing vessels bearing the distant echoes of ancient tradition, books were absent from the lives of all but a tiny fraction of humanity. Books then were travelers from the past rather than signals from the present, their cargo ignorance as often as knowledge. European awareness was parochial in the strict, original sense — limited to direct experience of the parish.

Yet a few decades after Gutenberg, there were millions of books flooding Europe, many written and owned by a new book-created middle class, full of new knowledge, art, disputation, and exploration. Mental horizons — once linked to the physical horizon just a few miles away — surged outward.

Formerly, knowledge of all kinds had been fixed by authority and embedded in hierarchy, and was by assumption and intention largely static. Yet the sharp drop in the price of reproducing books shattered this stagnant and immobilizing mentality. Printing rained new Renaissance texts and newly recovered classical works across Europe; printing catalyzed the scientific revolution; printing put technological and commercial innovation onto an upward arc still accelerating today. Printing ignited the previously wasted intellectual potential of huge segments of the population — people who, without printing, would have died illiterate, uneducated, without voice or legacy.

Printing summoned into existence increasingly diversified bodies of new knowledge, multiplied productive divisions of labor, midwifed new professions, and greatly expanded the middle class. It threw up voluntary, meritocratic new hierarchies of knowledge and productivity to rival traditional hierarchies of force and superstition. In short, the release of printing technology into human societies brought into being a vast new ecosystem of knowledge — dense, diverse, rapidly changing, rapidly growing, and beyond the ability of any one mind to encompass, or any government to control.

Over the previous millennium, heretics had appeared perennially, only to be crushed. Implicitly and explicitly, beyond all question, orthodoxy defined and embodied virtue. But when, after Gutenberg, heretics such as Luther gained access to printing presses, the rapid and broad dissemination of their writings allowed dissidents to muster enough socially coordinated recruits to militarily stalemate attempts by hierarchies to suppress them. Hence, the assumption of a single orthodoxy husbanded by a single system of sanctified authority was broken, beyond all recovery.

For the same reason that communist governments restricted access to Marx's and Engels' original writings, the Church had made it a death penalty offense (to be preceded by torture) to translate the Bible into the languages people spoke and understood. The radical change in attitude toward authority, and the revaluation of minds even at the bottom of society, can be seen in William Tyndale's defense of his plan to translate the Bible into English: "I defy the Pope, and all his laws; and if God spare my life, I will cause the boy that drives the plow to know more of the Scriptures than the Pope himself." (After his translation was printed, he was arrested, tied to the stake, and strangled.) Laymen, even plowboys, who now had access to Bibles (because they could both read and afford them) shockingly decided they could interpret sacred texts for themselves without the Church manipulatively interposing itself as intermediary between book and reader. Humans being what they are, religious wars followed, in struggles to make one or another doctrine (and elite) locally supreme.

Conflicts such as the Thirty Years War (with perhaps ten million dead and entire territories devastated) slowly awakened Europeans to the costs of violent intolerance, and starting among dissident Protestant communities, the recognized prerogatives of conscience and judgment devolved onto ever smaller units, eventually coming to rest in the individual (at least in some societies, and always disputed by rulers).

Freedom of thought and speech — where they exist — were unforeseen offspring of the printing press, and they change how we think. Political assumptions that had endured for millennia became inverted, making it thinkable that political legitimacy should arise from the sanction of the governed, rather than it being a natural entitlement of rulers. And science was the most radical of printing's many offspring.

Formerly, the social validation of correct opinion had been the prerogative of local force-based hierarchies, based on tradition, and intended to serve the powerful. Even disputes in natural philosophy had been settled by appeals to the textual authority of venerated ancients such as Aristotle. What alternative could there be? Yet, when the unified front of religious and secular authority began to fragment, logic and evidence could begin to play a role. What makes science distinct is that it is the human activity in which logic and evidence (suspect, because potentially subversive of authority) are allowed to play at least some role in evaluating claims.

Galileo — arguably the founder of modern science — was threatened with torture and placed under house arrest not for his scientific beliefs but rather for his deeper heresies about what validates knowledge: He argued that alongside scripture — which could be misinterpreted — God had written another book — the book of nature — written in mathematics, but open for all to see. Claims about the book of nature could be investigated using experiments, logic, and mathematics — a radical proposal that left no role for authority in the evaluation of (non-scriptural) truth. (Paralleling Tyndale's focus on the literate lay public, Galileo wrote almost all of his books in Italian rather than in Latin.) The Royal Society, founded two decades after Galileo's death, chose as its motto nullius in verba: on the authority of no one — a principle strikingly at variance with the pre-Gutenberg world.

The assumptions (e.g., I should be free to think about and question anything), methods (experimentation, statistical inference, model building), and content (evolutionary biology, quantum mechanics, the computational theory of mind) of modern thought are unimaginably different from those held by our ancestors living before Gutenberg. All this — to simplify slightly — because of a drop in the cost of producing books.

So what is happening to us, now that the Internet has engulfed us? The Internet and its cybernetic creatures have dropped, by many more orders of magnitude, the cost (in money, effort, and time) of acquiring and publishing information. The knowledge (and disinformation) of the species is migrating online, a click away.

To take just first-order consequences, we see all around us transformations in the making that will rival or exceed the printing revolution — for example, heating up the chain reactions of scientific, technical, and economic innovation by pulling out the moderating rods of distance and delay. Quantity, Stalin said, has a quality all its own. The Internet also unleashes monsters from the id — our evolved mental programs are far more easily triggered by images than by propositions, a reality jihadi Websites are exploiting in our new round of religious wars.

Our generation is living through this transformation, so although our cognitive unconscious is hidden from awareness, we can at least report on our direct experience of how our thinking has shifted from before to after. I vividly remember my first day of browsing — firing link after link after link, suspended in an endless elation as I surveyed possibility after possibility for twenty hours straight — something I still feel.

Now my browsing operates out of two states of mind: the first is broad, rapid, intuitive scanning, where I feel free to click without goals, in order to maintain some kind of general scientific and cultural awareness without drowning in the endless sea. The second is a disciplined, focused exploration, where I am careful to ignore partisan pulls and ad hominem distractions, to dispense with my own sympathies or annoyance, to strip everything away except information about causation, and paths to potential falsification or critical tests.

Like a good Kuhnian, I attempt to pay special attention to anomalies in my favored theories, which are easier to identify now that I can scan more broadly. More generally, it seems like the scope of my research has become both broader and deeper, because both cost less. Finally, my mind seems to be increasingly interwoven into the Internet — what I store locally in my own brain seems more and more to be metadata for the parts of my understanding that are stored on the Internet.

galia_solomonoff's picture

Architect; Solomonoff Architecture Studio

The Internet is producing a fundamental alteration in the relationship between knowledge, content, place and space. Consider the world as divided into two similarly populous halves: the ones born before 1980 and the ones born after 1980. There are of course other important differences, such as gender, race, class, ethnicity and geography, yet I see 1980 as significant in this shift in the relationship of knowledge, place and space, due to the use of the Internet.

Three examples/scenes:

Example/Scene 1:

I am responding to this question from Funes, a locality of 15,000 inhabitants in the core of the Argentine Pampas (countryside). I am in what is called a "locutorio": a place with eight fully equipped computers that charges $0.20 (twenty cents) for fifteen minutes of Internet use. Five other users are here: a woman in her 20s talking via Skype (with headphones) with her sister and niece in Spain; a man in his 30s, in a white shirt and tie, scanning a resume; two teens playing a video game with what I guess is a multi-placed or non-placed community; a man on a Facebook page posting photos of a baby and a trip; and myself, a 42-year-old architect on vacation with an assignment due in two hours!

I am the elder here. I am the nonlocal here. Yet the computer helps me and corrects my spelling without asking anyone.

Example/Scene 2:

Years ago, when I was an architectural student and wanted to know about, say, Guarino Guarini's importance as an architect, I would go two flights down the stairs at Avery Library, get a few index cards, follow the numbered instructions on those cards and find two or four or seven feet's worth of books on a shelf dedicated to the subject...then I would look at a few cross-referenced words on those cards, such as "mannerist architecture", follow another path in the same room, and identify another few feet of books on the subject. I would leaf through all the found books and get a vague yet physical sense of how much there was to know about the subject matter.

Now I Google "Guarino Guarini", and in 0.45 seconds, gets 108,000 entries, and the first page reveals specific details: he was born on January 7, 1624, and lived until March 6, 1683, six images of cupolas, a Wikipedia, and Encyclopedia Britannica entry. My Google search is both very detailed yet not at all physical. I can't tell how much I like this person's personality or work. I can't decide if I want to flip through more entries.

Example/Scene 3:

I am in a car traveling from New York to Philadelphia. I have GPS but no maps. The GPS announces where to go and takes into account traffic and tolls. I trust the GPS, yet in my memory I wish to reconstruct a trip I took years ago with other friends. On that other trip I had a map; I entered the city from a bridge; the foreground was industrial and decrepit, the background vertical and contemporary...at least that is what I remember...was it so? I zoom out the GPS to see if its map reveals an alternative entry route, a different way the city's geography can be approached. Nothing in the GPS map looks like the space I remember. What happened? Is my memory of the place faulty, or is the focus of the GPS too narrow?

The feeling I want to convey with these examples/scenes is how, over time and with the advent of the Internet, our sense of orientation, space and place has changed, as has our sense of the details necessary to make decisions. If decisions take into account the many ways in which information comes to us, then the Internet at this point privileges what we can see and read over many other aspects of knowledge and sensation: how much something weighs, how it feels, how stable it is. Are we, the ones who knew places before the Internet, more able to navigate them now, or less? Do we make better or worse decisions based on the content we take in? Do we have longer, better rests in faraway places, or constant placelessness? How have image, space, place and content been altered to give us a sense of here and now?

arnold_trehub's picture

Psychologist, University of Massachusetts, Amherst; Author, The Cognitive Brain

As I write this, a group of neuroscientists, psychologists, and philosophers located at far-flung corners of the world has been meeting online in a workshop devoted to solving what is arguably the fundamental problem in science — the mystery of human consciousness. The Internet has given me and the other participants in this effort the opportunity to ask each other probing questions, engage in civil argument, specify areas of agreement, clarify points of disagreement, and suggest what we should do next to advance our scientific understanding of consciousness. All of this discussion is taking place in near real-time, and all of our comments are preserved and archived for publication.

The usual scientific conferences did provide the opportunity to meet colleagues with common interests, present papers, and discuss them within very limited time frames. But this is nothing like what the Internet now makes possible. In online workshops of the kind in which I am now engaged, serious issues can be explored among key investigators, in depth, over many months; challenges can be posed and answered, and the current landscape of a deep scientific problem can be more sharply exposed. I believe that the Internet, used this way, will play a revolutionary role in promoting our understanding of the fundamental problems at the frontiers of science.

linda_stone's picture

Hi-Tech Industry Consultant; Former Executive at Apple Computer and Microsoft Corporation

Before the Internet, I made more trips to the library and more phone calls. I read more books and my point of view was narrower and less informed. I walked more, biked more, hiked more, and played more. I made love more often.

The seductive online sages, scholars, and muses that joyfully take my curious mind wherever it needs to go, wherever it can imagine going, whenever it wants, are beguiling. All my beloved screens offer infinite, charming, playful, powerful, informative, social windows into global human experience.

The Internet, the online virtual universe, is my jungle gym, and I swing from bar to bar, learning about how writing can be either isolating or social; DIY drones (unmanned aerial vehicles) at a Maker Faire; where to find a quantified-self meetup; or how to make Sach moan sngo num pachok. I can use image search to look up hope or success or play. I can find a video on virtually anything; I learned how to safely open a young Thai coconut from this Internet of wonder.

As I stare out my window, at the unusually beautiful Seattle weather, I realize, I haven't been out to walk yet today — sweet Internet juices still dripping down my chin. I'll mind the clock now, so I can emerge back into the physical world.

The physical world is where I not only see, I also feel — a friend's loving gaze in conversation; the movement of my arms and legs and the breeze on my face as I walk outside; the company of friends for a game night and potluck dinner. The Internet supports my thinking, and the physical world supports that, as well as rich sensing and feeling experiences.

It's no accident we're a culture increasingly obsessed with the Food Network and farmers' markets — they engage our senses and bring us together with others.

How has the Internet changed my thinking? The more I've loved and known it, the clearer the contrast, the more intense the tension between a physical life and a virtual life. The Internet stole my body, now a lifeless form hunched in front of a glowing screen. My senses dulled as my greedy mind became one with the global brain we call the Internet.

I am confident that I can find out about nearly anything online and also confident that in my time offline, I can be more fully alive. The only tool I've found for this balancing act is intention.

The sense of contrast between my online and offline lives has turned me back toward prizing the pleasures of the physical world. I now move with more resolve between each of these worlds, choosing one, then the other — surrendering neither.

sherry_turkle's picture

Abby Rockefeller Mauzé Professor of the Social Studies of Science and Technology, MIT; Internet Culture Researcher; Author, The Empathy Diaries

You stare at a screen in your home or in your hand. You own it; it is passive and glows — all things that seem to promise safety and a bounded space. But the feeling of sending an e-mail or text or instant message is at odds with its reality. You feel in a zone that is private and ephemeral. But the Internet is public and forever. This is the disconnect of Internet communication. It begins to explain why people, sophisticated people, continue to send damaging e-mails and text messages that document them breaking the law and betraying their families. These make the headlines. Other consequences of the disconnect show up in the inner life of the generation that has grown up with always-on/always-on-you connectivity. The disconnect shapes their psychological and political sensibility.

Dawn, eighteen, "scrubs" her Facebook pages just before she receives her college acceptance letters. She says, "I didn't want stories and pictures about high school parties and boys out there. I want a fresh start." But she could only delete so much. Her friends have pictures of her on their pages and messages from her on their walls. All of these will remain. And on the Internet, the worlds "delete" and "erase" are metaphorical; files, photographs, mail, and search history are only deleted from your sight. All of this upsets Dawn. She says, "It's like somebody is about to find a horrible secret that I didn't know I left someplace."

The psychologist and psychoanalyst Erik Erikson argued that adolescents needed an experience of "moratorium," a time and space for relatively consequence-free experimentation. They need to fall in and out of love with people and ideas. I have argued that the Internet provides such spaces and is thus a rich ground for working through identity. But over time, it has become clear that the idea of the moratorium space does not easily mesh with a life that generates its own electronic shadow. Over time, many find a way to ignore or deny the shadow. For teenagers, the need for a moratorium space is so compelling that they will recreate it as fiction. And indeed, leaving an electronic trace can come to seem so natural that the shadow seems to disappear. We want to forget that we have become the instruments of our own surveillance.

In the spirit of keeping the shadow at a distance, some work at staying uninformed. Julia, eighteen, says "I've heard that school authorities and local police can get into your Facebook," but doesn't want to know the details. "I live on Facebook," she explains, "and I don't want to be upset." A seventeen-year-old girl thinks that Facebook "can see everything," but even though "you can try to get Facebook to change things," it is really out of her hands. She sums up: "That's just the way it is." A sixteen-year-old girl says that even without privacy, she feels safe because "No one would care about my little life." For all the talk of a generation empowered by the Net, the question of online privacy brings out claims of intentionally vague understandings and protests of impotence. This is a life of resignation: teens are sure that at some point their privacy will be invaded, but that this is the cost of doing business in their world.

I grew up with my grandparents, who were frightened by the McCarthy era. A government that spied on its citizens: this is what their families had fled. In Eastern Europe, my grandmother explained, you assumed that other people read your mail. This never led to good. When someone knows everything, everyone can be turned into an informer. She was proud to be in America, where things were different. Every morning, we went together to the mailboxes of our apartment building. And many days, she would tell me, as if it had never come up before, "In America, no one can look at your mail. It's a federal offense. That's the beauty of this country." For me, and from the earliest age, this civics lesson at the mailbox joined together privacy and civil liberties. I think of how different things are for today's teenagers, who accommodate to the idea that their e-mail might be scanned by school authorities and that their online identities might be tampered with. Not a few sum up their position on all of this by saying in one way or another: "The way to deal is to just be good."

But sometimes a citizenry should not "be good." You have to leave room for this, space for dissent, real dissent. You need to leave technical space (a sacrosanct mailbox) and mental space. The two are intertwined. We make our technologies and they, in turn, make and shape us. My grandmother made me an American citizen and a civil libertarian in front of a row of mailboxes in Brooklyn. I am not sure what to tell an 18-year-old who thinks that Loopt (the application that uses the GPS capability of the iPhone to show you where your friends are) seems creepy but notes that it would be hard to keep it off her phone if all her friends had it. "They would think I had something to hide."

In a democracy, perhaps we all need to begin with the assumption that everyone has something to hide, a zone of private action and reflection, a zone that needs to be protected. Life with an electronic shadow provokes anxieties that lead today's teenagers to look toward a past they never knew. This nostalgia of the young looks forward, because it may remind us of things that are worth protecting. So, for example, teens talk longingly about the "full attention" that is implicit when someone sends you a letter or meets with you in a face-to-face meeting. And poignantly, they talk about seeking out a pay phone when they really want to have a private conversation.

The Internet teaches us to rethink nostalgia and give it a good name. I learned to be a citizen at the Brooklyn mailboxes. To me, opening up a conversation about rethinking the Net, privacy, and civil society is not backward-looking nostalgia or Luddite in the least. It seems like part of a healthy process of democracy defining its sacred spaces.

seirian_sumner's picture

Reader, Behavioral Ecology, University College London

I was rather stumped by this question because I have little experience of work or play without the Internet. My interests and the way I think, work and play have evolved alongside the Internet. Perhaps it would help if I could work out what life would be like for me without the Internet. Abstaining from the Internet is not a feasible experiment, even on a personal level! Instead, I exploited the very resource we are evaluating and asked my friends on Facebook what they thought their lives would be like without the Internet. If I could empathize with my alter-ego in a parallel 'offline' universe where there was no Internet, perhaps I could understand how the Internet has influenced the way I think.

Initial impressions of an Internet-free life from my Facebook friends were of general horror. The Internet plays a crucial role in our personal lives: my friends said they would be 'lost', 'stressed', 'anxious' and 'isolated' without it. They were concerned about 'No 24-7 chats?'; 'How would I make new friends/meet new people?'; 'How would I keep in touch with my friends abroad?'; 'I'd actually have to buy things in person from real people!'. We depend on the Internet as our social network, to connect with friends and strangers and to access resources. Sitting at my computer, I am one of the millions of 'nodes' making up the network. Whilst physical interaction with other nodes in the network is largely impossible, I am potentially connected to them all.

Caution and suspicion of the unfamiliar are ancestral traits of humans, ensuring survival by protecting against usurpation and theft of resources. A peculiar thing about the Internet is that it makes us highly receptive and indiscriminate in our interactions with complete strangers. The other day I received a message inviting me to join a Facebook group for people sharing 'Seirian' as their first name. Can I resist? Of course not! I'll probably never meet the other 17 Seirians, but I am now a 'node' connected to a virtual network of Seirians. Why did I join? Because I had nothing to lose, there were no real consequences, and I was curious to tap into a group of people wholly unconnected with my current social network. The more friendly connections I engage in, the greater the rewards I can potentially reap. If the 'Facebook Seirians' had knocked on my real front door instead of my virtual one, would I have signed up? No, of course not — too invasive, personal and potentially costly (they'd know where I live and I can't unplug them!). Contrary to our ancestral behaviours, we tolerate invasion of privacy online, and the success of the Internet relies on this.

Connectivity comes at the cost of privacy, but it does promote information acquisition and transfer. Although the initial response from my Facebook friends was fear of disconnection, more considered responses appreciated the Internet for the incredible resource it is and recognized that it could never be replaced with traditional modes of information storage and transfer. 'How do I find things out?'; 'Impossible to access information'; 'You mean I have to physically go shopping/visit the library?'; 'So slow..'; 'Small life'. The Internet relies on our greed for knowledge and connections, but also on our astonishing online generosity. We show inordinate levels of altruism on the Internet, wasting hours on chat room sites giving advice to complete strangers, or contributing anonymously to Wikipedia just to enrich other people's knowledge. There is no guarantee or expectation of reciprocation. Making friends and trusting strangers with personal information (be it your bank details or musical tastes) is an essential personality trait of an Internet user, despite being at odds with our ancestral natural caution. The data we happily give away on Facebook is exactly the sort of information that communist secret police sought through interrogation. By relaxing our suspicion (or perception) of strangers and behaving altruistically (indiscriminately), we share our own resources and gain access to a whole lot more.

I thought I had too little pre-Internet experience to be able to answer this question. But now I realize that we undergo rapid evolution into a different organism every time we log on. The Internet may not necessarily change the way we think, but it certainly shapes and directs our thoughts by changing our behaviour. Offline, we may be secretive, miserly, private, suspicious and self-centered. Online, we become philanthropic, generous, approachable, friendly, and dangerously unwary of strangers. Online behaviour would be selected out in an offline world because no one would cooperate — people don't want unprompted friendship and generosity from complete strangers. Likewise, offline behaviour does badly in an online world — unless you give a little of yourself, you get restricted access to resources. The reason for our personality change is that the Internet is a portal to lazy escapism: at the twitch of the mouse we enter a world where the consequences of our actions don't seem real. The degree to which our online and offline personas differ will of course vary from one person to another. At the most extreme, online life is literally one of carefree fantasy — live vicariously through your flawless avatar in the fantastical world of Second Life! What better way to escape the tedium and struggles of reality that confront our offline selves?

Is the change from offline to online behaviour adaptive? We ultimately strive to maximise our individual inclusive fitness. We can do this using our communication skills (verbal and written) to persuade other people to alter their behaviour for mutual benefits. Early hominid verbal communication and hieroglyphs were the tools of persuasion used by our ancestors. The Internet is the third great breakthrough in human communication, and our behavioural plasticity is a necessary means for exploiting it. Do we need to moderate these shifts in behaviour? One of my Facebook friends said it would be 'relaxing' without the Internet. Is our addiction to the Internet leaving us no time or space to think and process the complex stream of interactions and knowledge we get from it? Sleep is essential for 'brain sorting' — maybe offline life (behaviour) is too.

To conclude my answer to the question: the Internet changes my behaviour every time I log on, and in doing so influences how I think. My daring, cheeky, spontaneous, and interactive online persona makes me quicker-thinking and encourages me to think further outside my offline box. I think in tandem with the Internet, using its knowledge to inspire and challenge my thoughts. My essay is a testament to this – Facebook inspired my thoughts and provoked this essay, so I couldn't have done it without the Internet.

eric_r_weinstein's picture

Mathematician and Economist; Managing Director of Thiel Capital

Oddly, the Internet is still invisible to the point where many serious thinkers continue to doubt whether it changes modern thought at all.

In science we generally first learn about invisible structures from anomalies in concrete systems. The existence of an invisible neutrino, on the same footing as visible particles, was predicted in 1930 by Wolfgang Pauli as the error term necessary to save the principles of conservation of energy and momentum in beta decay. Likewise, human memes invisible to DNA (e.g. tunes) were proposed in 1976 by Richard Dawkins, since selection, to remain valid, must necessarily include all self-replicating units of transmission involved in tradeoffs with traditional genes.

Following this line of thinking, it is possible that a generalized Internet may even be definable with sufficient care as a kind of failure of the physical world to close as a self-contained system. Were a modern Rip van Winkle sufficiently clever, he might eventually infer something like the existence of file sharing networks from witnessing the collapse of music stores, CD sales, and the recording industry's revenue model.

The most important example of this principle has to do with markets and geography. The Internet has forced me to view physical and intellectual geography as instances of an overarching abstraction co-existing on a common footing. As exploration and trade in traditional physical goods like spice, silk and gold have long been linked, it is perhaps unsurprising that the marketplace of ideas should carry with it an intellectual geography all its own. The cartography of what may be termed the old world of ideas is well developed. Journals, prizes and endowed chairs give us landmarks to which we turn in the quest for designated thinkers, and for those wishing to hug the shore of the familiar, this proves a great aid.

Despite being relatively stable, the center of this scientific world began to shift in the last century from institutions in Europe to ones in North America. While there is currently a great deal of talk about a second shift from the U.S. towards Asia, it may instead happen that the next great migration will be dominated by flight to structures in the virtual from those moored to the physical.

Consider the award in 2006 of the Fields medal (the highest prize in mathematics) for a solution of the Poincaré Conjecture. This was remarkable in that the research being recognized was not submitted to any journal. In choosing to decline the medal, peer review, publication and employment, the previously obscure Grigori Perelman chose to entrust the legacy of his great triumph solely to an Internet archive intended as a temporary holding tank for papers awaiting publication in established journals. In so doing, he forced the recognition of a new reality by showing that it was possible to move an indisputable intellectual achievement out of the tradition of referee-gated journals bound to the stacks of university libraries into a new and poorly charted virtual sphere of the intellect.

But while markets may drive exploration, the actual settlement of the frontier at times requires the commitment of individuals questing for personal freedom, and here the new world of the Internet shines. It is widely assumed that my generation failed to produce towering figures like Crick, Dirac, Grothendieck or Samuelson because something in the nature of science had changed. I do not subscribe to that theory. Suffice it to say that issues of academic freedom have me longing to settle among the noble homesteaders now gathering on the efficient frontier of the marketplace of ideas. My intellectual suitcases have been packed for months now as I try to screw up the courage and proper 'efficient frontier mentality' to follow my own advice to the next generation: "Go virtual, young man."

charles_seife's picture

Professor of Journalism, New York University; Former Journalist, Science Magazine; Author, Hawking Hawking

The process was so gradual, so natural, that I didn't notice it at first. In retrospect, it was happening to me long before the advent of the Internet. The earliest symptoms still mar the books in my library. Every dog-eared page represents a hole in my memory. Instead of trying to memorize a passage in the book or remember an important statistic, I took an easier path, storing the location of the desirable memory instead of the memory itself. Every dog-ear is a meta-memory, a pointer to an idea that I wanted to retain but was too lazy to memorize.

The Internet turned an occasional habit into my primary way of storing knowledge. As the Web grew, my browsers began to bloat with bookmarked Websites, with sites that stored information that I deemed important but didn't feel obliged to commit to memory. And as search engines matured, I stopped bothering even with bookmarks; I soon relied upon AltaVista, HotBot, and then Google to help me find — and recall — ideas. My meta-memories, my pointers to ideas, started being replaced by meta-meta-memories, by pointers to pointers to data. Each day, my brain fills with these quasi-memories, with pointers and with pointers to pointers, each one a dusty IOU sitting where a fact or idea should reside.
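
Seife's ladder of indirection is, in effect, a familiar data structure. Here is a minimal sketch in Python (the fact, the bookmark, and the one-entry search index are all invented for illustration) of a memory, a meta-memory, and a meta-meta-memory:

    # A memory: the fact itself, held directly in the head.
    fact = "The speed of light is 299,792,458 m/s."

    # A meta-memory: a dog-ear or bookmark, a pointer to where the fact lives.
    bookmark = {"where": "physics_notes.pdf, page 12", "what": "speed of light"}

    # A meta-meta-memory: a pointer to a pointer, just the query that will
    # re-find the bookmark (or the fact) on demand.
    query = "speed of light exact value"

    def recall(q):
        """Stand-in for a search engine: resolve a query back into a fact."""
        index = {"speed of light exact value": fact}  # toy one-entry index
        return index.get(q, "a void where an idea should be")

    print(recall(query))           # the fact, retrieved via indirection
    print(recall("lost thought"))  # a corrupt file: only the ghost remains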

Now, when I expend the effort to squirrel memories away, I store them in the clutter of my hard drive as much as I do in the labyrinth of my brain. As a result, I spend as much time organizing them, making sure I can retrieve them on demand, as I do collecting them. My memories are filed in folders within folders within folders, easily accessible — and searchable, in case my meta-memory of their location fails. And when a file becomes corrupt, all I am left with is a pointer, a void where an idea should be, a ghost of a departed thought.

stuart_pimm's picture

Doris Duke Chair of Conservation Ecology, Duke University; Author, The World According to Pimm: a Scientist Audits the Earth

Once upon a time, we had the same world we do now. We knew little about its problems.

Wise men and women pontificated about their complete worlds, worlds that, for some, stretched only to the limits of their city centres or, sometimes, only to the grounds of their colleges. This allowed them clever conceits about what was really important in life, art, science and the rest of it.

Lesser minds would come to pay homage and, let's be honest, use the famous library since that was the only way of knowing what was known and who knew it. The centres ruled and they knew it.

It's late in the evening when I see the light on in the lab and stop by to see who else is working late. There's a conversation going on over Skype. It's totally incomprehensible. Even its sounds aren't familiar. There's no RosettaStone© for the language my two students are learning from their correspondent, who sits in a café in a wretched oil town on the edge of the rainforest in Ecuador. It's spoken by only a few hundred Indians. All but their children were born as nomads in a forest that has the luck to be sitting on billions of barrels of oil. I didn't say "good luck."

In a few months, we'll be in that forest. My students will improve their language skills with the Indian women, helping them prepare chicha by chewing manioc, spitting it into the bowl and chewing another mouthful.

With the Internet, what happens there is exactly as close as anything else I want to understand or communicate, give or take the slow phone line or cell phone reception. When an oil company pushes a road far closer to a reserve than it promised, we'll know about it immediately. When some settlers try to clear forest, we'll know about them killing Indians just as quickly, and when the Indians kill them with their spears. So will everyone else.

The Internet is instant news from remote places with photos to prove it. What we now think about instantly is suddenly much larger, more frightening, and far more challenging than it once was.

The Internet has vastly more coverage of everything: immediate, future, and past. So when we want to know who has signed which oil exploration leases to which tracts of remote forest, the data are not in Duke's library (or anyone else's), but I can get them online from the Web sites of local newspapers. And I can do that in the forest clearing, surrounded by those whose futures have been signed away. Knowledge is now everywhere. You can find it from everywhere too.

The Internet has vastly increased the size of the problem set about humanity's future. Some problems now look really puny. They probably always were.

Who does the thinking has changed too. When knowledge is everywhere, so are the thinkers.

matthew_ritchie's picture

Supposedly the Internet (strictly speaking, the World Wide Web) was invented at CERN. If CERN is really responsible for this infinitely large filing cabinet, filled to bursting by lunatics, salesmen, hobbyists and pornographers, that folds up like Masefield's box of delights and fits into my pocket, then CERN poses an even larger threat to the world than the fabled potential production of black holes.

Nonetheless, I use it, or does it use me? Is it a new cultural ecology, an ecology of mind? If it is, who are the real predators, who is being eaten on-line? Is it me?

Once I longed to create an interface that would simulate my interaction with the real world. Now I realize that the interface I want is the real world. Can the Internet give me that back?

Is it an archive? I can learn a new idea every day on the Internet. I have learned about many old ideas and many false ideas. I have read many obvious lies. This capacity to indefinitely sustain a lie is celebrated as freedom. Denialism enters stage left, cloaked as skepticism. We need a navigation system we can trust. Someday soon we'll need our 20th century experts and interpreters to be replaced by 21st century creator-pilots.

Is it an open system? It seems impossible to find out on the Internet what it really costs the planet to sustain the Internet and its toys, what it costs our culture to think, to play, to fondle and adore itself. Seven of the world's largest corporations own all the routers and cables. Everyone pays the ferryman.

Is it liberating? The old, the poor and the uneducated are locked out. Everyone else is locked in. All studies show mass users locked in reversed and concentric learning patterns, seeking only the familiar, even, perhaps especially, if novelty is their version of the same old thing. As a shared space, it is a failure, celebrating only those that obey its rules. We sniff out our digital blazes, following the circular depletion of our own curiosity reservoirs. We are running out of selves.

Is it really just about communication? To travel is to enter a world of monastic chimes and insectile clicks, as unloved cell phone chatter is replaced by mobile anchorites locked in virtual communion with their own agendas and prejudices, cursing when their connections fail and they are returned to the real, immediate world. But unplugging only returns us, and them, to a space in-waiting, designed and ordered by the same system.

Is it a new space? If this is true, then immediately I am drawn to the implied space inevitably also being created, the anti-net. If it's a new space, how big are we, when we are on-line? But what's really missing here? Meaning, touch, time and place are what's missing here. We need a holographic rethinking of scale and content.

But like you, I'm back every day, 'collaborating' as they say. Because there is something being built, or building itself, in this not-yet space. Perhaps the Internet we know is merely a harbinger and, like Ulysses returning, dirty, false and lame, it will only truly reveal itself when we are ready. Perhaps it will unfold itself soon and help us bring the real ecology back to life, unveil the conspiracies, shatter the mirrors, tear down the walls, rejoice and bring forth the promise that is truly waiting in us, waiting only for its release. I'm ready now.

steven_pinker's picture

Johnstone Family Professor, Department of Psychology; Harvard University; Author, Rationality

As someone who believes both in human nature and in timeless standards of logic and evidence, I'm skeptical of the common claim that the Internet is changing the way we think. Electronic media aren't going to revamp the brain's mechanisms of information processing, nor will they supersede modus ponens or Bayes' theorem. Claims that the Internet is changing human thought are propelled by a number of forces: the pressure on pundits to announce that this or that "changes everything"; a superficial conception of what "thinking" is that conflates content with process; the neophobic mindset that "if young people do something that I don't do, the culture is declining." But I don't think the claims stand up to scrutiny.

Has a generation of texters, surfers, and twitterers evolved the enviable ability to process multiple streams of novel information in parallel? Most cognitive psychologists doubt it, and recent studies by Clifford Nass confirm their skepticism. So-called multitaskers are like Woody Allen after he took a speed-reading course and devoured War and Peace in an evening. His summary: "It was about some Russians."

Also widely rumored are the students who cannot write a paper without instant-message abbreviations, emoticons, and dubious Web citations. But students indulge in such laziness to the extent that their teachers let them get away with it. I have never seen a paper of this kind, and a survey of university student papers by Andrea Lunsford shows they are mostly figments of the pundits' imaginations.

The way that intellectual standards constrain intellectual products is nowhere more evident than in science. Scientists are voracious users of the Internet, and of other computer-based technologies that are supposedly making us stupid, like PowerPoint, electronic publishing, and email. Yet it would be ludicrous to suggest that scientists think differently than they did a decade ago, or that the progress of science has slowed.

The most interesting trend in the development of the Internet is not how it is changing people's ways of thinking but how it is adapting to the way that people think. The leap in Internet usage that accompanied the appearance of the World Wide Web more than a decade ago came from its user interface, the graphical browser, which worked around the serial, line-based processing of the actual computer hardware to simulate a familiar visual world of windows, icons, and buttons. The changes we are seeing more recently include even more natural interfaces (speech, language, manual manipulation), better emulation of human expertise (as in movie, book, or music recommendations, and more intelligent search), and the application of Web technologies to social and emotional purposes (such as social networking, sharing of pictures, music, and video) rather than just the traditional nerdy ones.

To be sure, many aspects of the life of the mind have been affected by the Internet. Our physical folders, mailboxes, bookshelves, spreadsheets, documents, media players, and so on have been replaced by software equivalents, which has altered our time budgets in countless ways. But to call it an alteration of "how we think" is, I think, an exaggeration.

rudy_rucker's picture

Mathematician; Computer Scientist; Cyberpunk Pioneer; Novelist, Infinity and the Mind, Postsingular, and (with Bruce Sterling) Transreal Cyberpunk.

Twenty or thirty years ago, people dreamed of a global mind that knew everything and could answer any question. In those early times, we imagined that we'd need a huge breakthrough in artificial intelligence to make the global mind work — we thought of it as resembling an extremely smart person. The conventional Hollywood image for the global mind's interface was a talking head on a wall-sized screen.

And now, in 2010, we have the global mind. Search engines, user-curated encyclopedias, images of everything under the sun, clever apps to carry out simple computations — it's all happening. But old-school artificial intelligence is barely involved at all.

As it happens, data, and not algorithms, is where it's at. Put enough information into the planetary information cloud, crank up a search engine, and you've got an all-knowing global mind. The answers emerge.

Initially people resisted understanding this simple fact. Perhaps this was because the task of posting a planet's worth of data seemed so intractable. There were hopes that some magically simple AI program might be able to extrapolate a full set of information from a few well-chosen basic facts — just as a person can figure out another person on the basis of a brief conversation.

At this point, it looks like there aren't going to be any incredibly concise aha-type AI programs for emulating how we think. The good news is that this doesn't matter. Given enough data, a computer network can fake intelligence. And — radical notion — maybe that's what our wetware brains are doing, too. Faking it with search and emergence. Searching a huge database for patterns.

The seemingly insurmountable task of digitizing the world has been accomplished by ordinary people. This results from the happy miracle that the Internet is unmoderated and cheap to use. Practically anyone can post information onto the Web, whether as comments, photos, or full-blown Web pages. We're like worker ants in a global colony, dragging little chunks of data this way and that. We do it for free; it's something we like to do.

Note that the Internet wouldn't work as a global mind if it were a completely flat and undistinguished sea of data. We need a way to locate the regions that are most desirable in terms of accuracy and elegance. An early, now-discarded, notion was that we would need some kind of information czar or committee to rank the data. But, here again, the anthill does the work for free.

By now it seems obvious that the only feasible way to rank the Internet's offerings is to track the online behaviors of individual users. By now it's hard to remember how radical and rickety such a dependence upon emergence used to seem. No control! What a crazy idea. But it works. No centralized system could ever keep pace.
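As a rough sketch of that emergent ranking, assume the only signal available is a log of which pages individual users clicked (the log and page names below are invented; real engines blend many more signals at enormous scale). Ranking then reduces to counting the anthill's behavior:

```python
from collections import Counter

# Invented click log: each entry is one user choosing one page.
click_log = [
    "wiki/ants", "blog/ants", "wiki/ants",
    "wiki/ants", "spam/ants", "blog/ants",
]

def rank(log):
    # No czar, no committee: pages are ordered purely by how often
    # individual users chose them.
    return [page for page, _ in Counter(log).most_common()]

print(rank(click_log))
# -> ['wiki/ants', 'blog/ants', 'spam/ants']
```

The ordering revises itself as new clicks arrive, which is why no centralized, hand-curated system could keep pace.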

An even more surprising success is found in user-curated encyclopedias. When I first heard of this notion, I was sure it wouldn't work. I assumed that trolls and zealots would infect all the posts. But the Internet has a more powerful protection system than I'd realized. Individual users are the primary defenders.

We might compare the Internet to a biological system in which new antibodies emerge to combat new pathogens. Malware is forever changing, but our defenses are forever evolving as well.

I am a novelist, and the task of creating a coherent and fresh novel always seems in some sense impossible. What I've learned over the course of my career is that I need to trust in emergence, also known as the muse. I assemble a notes document filled with speculations, overheard conversations, story ideas, and flashy phrases. Day after day, I comb through my material, integrating it into my mental Net, forging links and ranks. And, fairly reliably, the scenes and chapters of my novel emerge. It's how my creative process works.

In our highest mental tasks, any dream of an orderly process is a will-o'-the-wisp. And there's no need to feel remorseful about this. Search and emergence are good enough for the global mind, and they're good enough for us.

emily__pronin's picture

Associate Professor of Psychology, Princeton University

A subject in a psychology experiment stands in a room with various objects strewn around the floor and two cords hanging from the ceiling. He is tasked with finding ways to tie the two cords together. The only problem is that they are far enough apart that if he grabs onto one, he cannot reach the other. After devising some obvious solutions (such as lengthening one of the cords with an extension cord), the subject is stumped. Then, the experimenter casually bumps into one of the cords, causing it to swing to and fro. The subject suddenly has a new idea! He swings one cord towards the other, thus allowing him to reach both at once.

Here's something interesting about this experiment: Subjects failed to recognize the experimenter's role in leading them to this new idea. They believed that the thought of swinging the cord just "dawned" on them, or that it resulted from systematic analysis, or from consulting physics principles, or from images they conjured of monkeys swinging in trees. As this experiment and others like it (reviewed in a classic article by Richard Nisbett and Timothy Wilson) illustrate, people are unaware of the particular influences that produce their thoughts. We know what we think, but we don't know why we think it. When a friend claims that it is her penchant for socialist ideals that leads her to support the latest healthcare reform bill, it might be wise for you to assume she likes the bill but to doubt her reasons why (and she ought to share your skepticism!).

This brings me to the question of how the Internet has changed the way I think. The problem is this: When it comes to my thoughts, I can honestly tell you what I think (about everything from mint chip ice cream to e-mail… I love the former and am ambivalent about the latter), but I can only speculate as to why I think those things (does my love of mint chip ice cream reflect its unique flavor, or fond childhood memories of summer vacations with my pre-divorced parents?). How has the Internet changed the way I think? I can't really say, because I have no direct knowledge of what influences my thinking.

The idea that my own mental processes are impenetrable to me is a tough one to swallow. It's hard to accept that, at a very basic level, I don't know what's going on in my own head. At the same time, the idea has a certain obviousness to it: of course I can't recount the enormous complexity of biochemical processes and neural firing that gives rise to my thoughts. The typical neuron in my brain has thousands of synaptic connections to other neurons. Sound familiar?

The Internet's most popular search tool also feeds me thoughts (tangible ideas encoded in words) via a massively-connected system that operates in a way that is hidden from me. The obscurity of Google's inner workings (or the Net's more generally) makes its potential impact on my thoughts somewhat unnerving. My thinking may be influenced by unexpected search hits and extraneous words and images that are derived via a process beyond my comprehension and control. So while I have the feeling that it's me driving the machine, perhaps it's more the machine driving me. But wait, hasn't that always been the case? Same process, different machine.

douglas_rushkoff's picture

Media Analyst; Documentary Writer; Author, Throwing Rocks at the Google Bus

How does the Internet change the way I think? It puts me in the present tense. It's as if my cognitive resources are shifted from my hard drive to my RAM. That which is happening right now is valued, and everything in the past or future becomes less relevant.

The Internet pushes us all toward the immediate. The now. Every inquiry is to be answered right away, and every fact or idea is only as fresh as the time it takes to refresh a page.

And as a result, speaking for myself, the Internet makes me mean. Resentful. Short-fused. Reactionary.

I feel it when I'm wading through a stack of emails, keeping up with an endless Twitter feed, accepting Facebook "friends" from a past I prefer not to remember, or making myself available on the Web to readers to whom I should feel grateful, but instead feel obligated. And it's not a matter of what any of these folks might want me to do, but when. They want it now.

This is not a bias of the Internet itself, but of the way it has changed from an opt-in activity to an "always on" condition of my life. The bias of the medium was never towards real-time activity, but towards time shifting. Unix, the operating system of the Net, doesn't work in real time. It sits and waits for human commands. Likewise, early Internet forums and bulletin boards were discussions users returned to at their convenience. I dropped in on a conversation, then came back the next evening or next week to see how it had developed. I took the time to consider what I might say, to contemplate someone else's response. An Internet exchange was only as rich as the amount of time I allowed to pass between posts.

Once the Internet changed from a resource at my desk into an appendage chirping from my pocket and vibrating on my thigh, however, the value of depth was replaced by that of immediacy masquerading as relevancy. This is why Google is changing itself from a search engine to a "live" search engine, why email devolved to SMS and blogs devolved to tweets. It's why schoolchildren can no longer engage in linear arguments, why narrative structure collapsed into reality TV, and why almost no one can engage in meaningful dialogue about long-term global issues. It creates an environment where a few incriminating emails between scientists generate more news than our much slower but more significant climate crisis.

It's as if the relentless demand of networks for me to be everywhere, all the time, denies me access to the moment in which I am really living. And it is this sense of disconnection, more than distraction, multi-tasking, or long-distance engagement, that makes the Internet so aggravating.

In some senses, this was the goal of those who developed the computers and networks on which we depend today. Technology visionaries such as Vannevar Bush and J.C.R. Licklider sought to develop machines that could do our remembering for us. Computers would free us from the tyranny of the past, as well as the horrors of World War II, allowing us to forget everything and devote our minds to solving the problems of today. The information would still be there; it would simply be stored out of body, in a machine.

And that may have worked had technological development leaned towards the option of living life disconnected from those machines whenever access to their memory banks was not required. Instead, I feel encouraged to use networks not just to access information, but to access other people, and to grant them access to me, wherever and whenever I happen to be.

This always-on approach to digital technology surrenders my nervous system rather than expanding it. Likewise, the simultaneity of information streaming towards me prevents parsing or consideration. It becomes a constant flow which must be managed, perpetually.

The now-ness of the Internet engenders impulsive, unthinking responses over considered ones, and a tendency to think of communications as a way to bark orders or fend off those of others. I want to satisfy the devices chirping and vibrating in my pockets, only to make them stop. Instead of looking at each digital conversation as an opportunity for depth, I experience them as involuntary triggers of my nervous system. Like my fellow networked humans, I now suffer the physical and emotional stresses previously associated with careers such as air traffic controllers and 911 operators.

By surrendering my natural rhythms to the immediacy of my networks, I am optimizing myself and my thinking to my technologies, rather than the other way around. I feel as though I am speeding up, when I am actually just becoming less productive, less thoughtful, and less capable of asserting any agency over the world in which I live. The result is something akin to future shock. Only in our era, it's more of a present shock.

I try to look at the positive: Our Internet-enabled emphasis on the present may have liberated us from the 20th century's dangerously compelling ideological narratives. No one (well, hardly anyone) can still be persuaded that brutal means are justified by mythological ends. And people are less likely to believe employers' and corporations' false promises of future rewards for years of loyalty now.

But, for me anyway, it has not actually brought me into greater awareness of what is going on around me. I am not approaching some Zen state of an infinite moment, completely at one with my surroundings, connected to others, and aware of myself on any fundamental level.

Rather, I am increasingly in a distracted present, where forces on the periphery are magnified and those immediately before me are ignored. My ability to create a plan, much less follow through on it, is undermined by my need to be able to improvise my way through any number of external impacts which stand to derail me at any moment. Instead of finding a stable foothold in the here and now, I end up reacting to an ever-present assault of simultaneous impulses and commands.

The Internet tells me I am thinking in real time, when what it really does, increasingly, is take away the real and take away the time.

robert_provine's picture

Professor Emeritus, University of Maryland, Baltimore County; Author, Curious Behavior: Yawning, Laughing, Hiccupping, and Beyond

At the end of my college lectures, students immediately flip open their cellphones, checking for calls and texts. In the cafeteria, I observe students standing in queues, texting, neglecting fellow students two feet away. Late one afternoon, I noticed six students wandering up and down a long hallway while using cellphones, somehow avoiding collision, like ships cruising in the night, lost in a fog of conversation, or like creatures from The Night of the Living Dead. A student reported emailing during a "computer date," not leaving her room on a Saturday night. Paradoxically, these students were both socially engaged and socially isolated.

My first encounter with people using unseen phone headsets was startling; they walked through an air terminal apparently engaging in soliloquies or responding to hallucinated voices. More is involved than the displacement of snail mail by email, a topic of past decades; face-to-face encounters are being displaced by relations with a remote, disembodied conversant somewhere in cyberspace. These experiences forced a rethinking of my views about communication, technological and biological, ancient and modern, and prompted research projects examining the emotional impact, novelty and evolution of social media.

The gold standard for interpersonal communication is face-to-face conversation in which you can both see and hear your conversant. In several studies, I contrasted this ancestral audiovisual medium with cellphone use, in which you hear but do not see your conversant, and texting, in which you neither see nor hear your conversant. Conversations between deaf signers provided a medium in which individuals see but do not hear their conversant.

The telephone, cell or land line, provides a purely auditory medium that transmits two-way vocal information, including the prosodic (affective) component of speech, but filters the visual signals of gestures, tears, smiles and other facial expressions. The purely auditory medium of the telephone is, itself, socially and emotionally potent, generating smiles and laughs in remote individuals, a point confirmed by observation of 1,000 solitary people in public places. Unless using a cellphone, isolated people are essentially smileless, laughless, and speechless. (We confirmed the obvious because the obvious is sometimes wrong.) Constant, emotionally rewarding vocal contact with select, distant conversants is a significant contributor to the worldwide commercial success of cellphones. Radio comedy and drama further demonstrate the power of a purely auditory medium, even when directed one-way from performer to audience. While appreciating the inventions of the telephone and broadcasting, it occurred to me that the ability to contact unseen conversants is a basic property of the auditory sense; it's as old as our species and occurs every time that we speak with someone in the dark or not in our line of sight. Phones become important when people are beyond shouting distance.

The emotional communication between individuals who can see but not hear their conversant was explored in a study of deaf individuals with collaborator Karen Emmorey. We observed vocal laughter and associated social variables in conversations between deaf signers using American Sign Language. Despite their inability to hear their conversational partner, deaf signers laughed at the same places in the stream of signed speech, at similar material, and showed the same gender patterns of laughter as hearing individuals during vocal conversations. An emotionally rich dialogue can be, therefore, conducted with an exclusively visual medium that filters auditory signals and passes only visual ones. Less nuanced visual communication is ancient and used when communicating beyond vocal range via such signals as gestures, flags, lights, mirrors, or smoke.

Text messaging, whether meaty emails or telegraphic tweets, involves conversants who can neither see nor hear each other and are not interacting in real time. My research team examined emotional communication online by analyzing the placement of 1,000 emoticons in Website text messages. Emoticons resembled conversational laughter in their placement in the text-stream — they seldom interrupted phrases. For example, you may text, "You are going where on vacation? Lol," but not "You are — lol — going where on vacation?"
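A crude version of that placement analysis can be sketched as follows, assuming a simplified coding rule (the study's actual criteria are not specified here): an emoticon sits at a phrase boundary if it ends the message or follows punctuation, and interrupts a phrase otherwise. The messages and emoticon list are invented for illustration.

```python
import re

# Invented sample messages.
messages = [
    "You are going where on vacation? lol",
    "That movie was great! :) see you tomorrow",
    "I was :) thinking we could meet earlier",
]

def classify_placements(msgs, emoticons=("lol", ":)", ":(")):
    # Count emoticons at phrase boundaries versus mid-phrase.
    boundary, interrupting = 0, 0
    pattern = "|".join(map(re.escape, emoticons))
    for msg in msgs:
        for m in re.finditer(pattern, msg):
            before = msg[:m.start()].rstrip()
            # Phrase boundary: start/end of message or just after punctuation.
            if not before or before[-1] in ".!?," or m.end() == len(msg):
                boundary += 1
            else:
                interrupting += 1
    return boundary, interrupting

print(classify_placements(messages))
# -> (2, 1)
```

Applied to a real corpus, counts like these would show whether emoticons pattern like conversational laughter, seldom interrupting phrases, as the study found.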

Technophiles writing about text messaging sometimes justify emoticon use as a response to the "narrowing of bandwidth" characteristic of text messaging, ignoring that text viewed on a computer monitor or cellphone is essentially identical to that of a printed page. I suspect that emoticon use is a likely symptom of the limited literary prowess of texters. Know what I mean? Lol. Readers seeking the literary subtleties of irony, paradox, sarcasm, or sweet sorrow are unlikely to find them in text messages. Although not providing immediate, long-distance contact, physically transported handwritten text messages have existed since clay tablets and papyrus, and could be faster than commonly thought. Unless checked frequently, electronic text messaging may not be faster than the postal service of 18th Century London, which had up to six deliveries per day and offered the possibility of a same-day receipt and response. A century later, telegraphy provided an even faster pre-Internet text option.

The basic cellphone has morphed into a powerful, mobile, multimedia communication device and computer terminal that is a major driver of Internet society. It gives immediate, constant contact with select, distant conversants, and can tell you where you are, where you should go next, how to get there, provide diversions while waiting, and document your journey with text, snaps and video images. For some, this is enhanced reality, but it comes at the price of the here-and-now. Whatever your opinion and level of engagement, the cellphone and related Internet devices are profound social prostheses — almost brain implants — that have changed our lives and culture.

karl_sabbagh's picture

Producer; Founder, Managing Director, Skyscraper Productions; Author, The Antisemitism Wars: How the British Media Failed Their Public

When the British playwright Harold Pinter developed cancer of the oesophagus, his wife, Lady Antonia Fraser, discovered from the Internet that there was a 92% mortality rate. "If you have cancer, don't go on the Internet," she said in an interview published by The Sunday Times in January 2010.

This set me thinking about my own interactions with the Internet, and how they might differ fundamentally from using any other sources of information.

Lady Antonia could, I suppose, have said, "If you have cancer, don't look at the Merck Manual," or some other medical guide, but there must be more to it than that. It is, first of all, the effortlessness with which it can be used. I used to joke that if I had a query which could be answered by consulting a book in the shelves on the other side of my study or by using the Internet, it would be quicker and less energy-consuming to find the answer on the Internet. It's not even funny any more, because it's obviously the most efficient way to do things. I am one of the few people who seem to trust Wikipedia. Its science entries, in particular, are extremely thorough, reliable and well-sourced. People who trust books (two or more years out of date) rather than Wikipedia are like people who balk at buying on the Internet for security reasons but happily pay with a credit card in restaurants where an unscrupulous waiter could keep the carbon copy of the slip and run up huge bills before they knew it.

Lady Antonia Fraser's remark was really a tribute to the reliability and comprehensiveness of the Internet. It wasn't so much that she came across a pessimistic forecast of Harold's prognosis, more that it was probably a reliable pessimistic forecast, based on up-to-date information. It doesn't of course mean that it was accurate. She may not have consulted all cancer sites, or it may be that no one really knows for sure what the prognosis was for oesophageal cancer. But she assumed, and I assume myself when using the Internet, that with a little skill and judgment you can get more reliable information there than anywhere else.

This, of course, has nothing to do with thinking. It could be that I would think the same if I'd been writing my books with a quill pen and had only the Bible, Shakespeare and Dr. Johnson's Dictionary to consult. But the Internet certainly constrains what I think about. It stops me thinking any more about that great idea for a book that I now find was published a few years ago by a small university press in Montana.

It also reinforces my belief in my own ideas and opinions because it is now much quicker to test them, particularly when they are new opinions. By permitting anyone to publish anything, the Internet allows me to read the whole range of views on a topic, and infer from the language used the reasonableness or otherwise of the views. Of course, I was inclined to disbelieve in Intelligent Design before I had access to the wide range of wacky and hysterical Websites that promote it. But now I have no doubts at all that the theory is tosh. (Slang, chiefly British: nonsense, rubbish; The Free Dictionary)

But this is still not to do with thinking. What do I do all day, sitting at my computer? I string words together, reread them, judge them, improve them if necessary and print them out or send them to people. And underlying this process is a judgement about what is interesting, novel or in need of explanation, and the juggling of words in my mind to express these concepts in a clear way. None of that, as far as I am aware, has changed because of the Internet.

But this is to deal with only one aspect of the Internet, its provision of factual content. There is also email and attachments and blogs and software downloads and YouTube and Facebook and Internet shopping and banking and weather forecasts and Google Maps and and and…. But before all this, I knew there were lots of people in the world, capable of using language and saying clever or stupid things. Now I have access to them in a way I didn't before, but again this is just information provision rather than a change in ways of thinking.

Perhaps the crucial factor is speed. If I was setting out to write a book, I would start with a broad outline and a chapter breakdown, and these would lead me to set a series of research tasks which could take months: look in this library, write to this expert, look for this book, find this document. Now the order of things has changed. While I was doing all the above, which could take weeks or months, my general ideas for the book would be evolving. My objectives might change, and my research tasks with them. I would do more 'broad brush' thinking. Now, when documents can be found and downloaded in seconds, library catalogues consulted from one's desk, experts emailed and a reply received within 24 hours, the idea is set in stone much earlier. But even here there is no significant difference in thinking. If, in the course of the research, some document reveals a different angle, the fact that this happens within hours or days rather than months can only be to the good. The broad brush thinking is now informed rather than uninformed.

I give up. The Internet hasn't changed how I think. It's only a tool. An electric drill wouldn't change how many holes I make in a piece of wood, it would only make the hole-drilling easier and quicker. A car doesn't change the nature and purpose of a journey I make to the nearest town, it only makes it quicker and leads to me making more journeys than if I walked.

But what about Lady Antonia Fraser? Is the truth-telling power of the Internet something to avoid? The fact is, the Internet reveals in its full horror the true nature of mankind: its obsessions, the triviality of its interests, its scorn for logic or rationality, its inhumanity, the power of capital, the intolerance of the other. But anyone who says this is news just doesn't get out enough. The Internet magnifies and specifies what we already know about mankind; if we don't know it, we're rather naïve. The only way my thinking would have been changed by this 'revelation' would have been if I believed along with Dr Pangloss that all is for the best in the best of all possible worlds. And I don't.

steven_r_quartz's picture

Neuroscientist; Professor of Philosophy, Caltech; Co-author, Cool

I don't know how the Internet is changing the way I think because I don't know how I think. For that matter, I don't think we know very much about how anyone thinks. Most likely our current best theories will end up relegated to the dustbin as not only wrong but misleading. Consider, for example, our tendency to reduce human thought to a few distinct processes. We've been doing this for a long time: Plato divided the mind into three parts, as did Freud. Today, many psychologists divide the mind into two (as Plato observed, you need at least two parts to account for mental conflict, as in that between reason and emotion). These dual-systems views distinguish between automatic and unconscious intuitive processes and slower and deliberative cognitive ones. This is appealing, but it suffers from considerable anomalies. Deliberative, reflective cognition has long been the normative standard for complex decision-making, the subject of decision theory and microeconomics. Recent evidence, however, suggests that unconscious processes may actually be better at solving complex problems.

Based on a misunderstanding of its capacity, our attention to normative deliberative decision-making probably contributed to a lot of bad decision-making. As attention turns increasingly to these unconscious, automatic processes, it is unlikely that they can be pigeon-holed into a dual-systems view. Theoretical neuroscience offers an alternative model with three distinct systems, a Pavlovian, a Habit, and a Goal-Directed system, each capable of behavioral control. Arguably, this provides a better understanding of human decision-making: the habit system may guide us to our daily Starbucks fix (even if we no longer like it), while the Pavlovian system may cause us to choose a pastry once there despite our goal of losing weight. But this too likely severely under-estimates the number of systems that constitute thought. If a confederacy of systems constitutes thought, is their number closer to 4 or 400? I don't think we have much basis today for answering one way or another.

Consider also the tendency to treat thought as a logic system. The canonical model of cognitive science views thought as a process involving mental representations and rules for manipulating those representations (a language of thought). These rules are typically thought of as a logic, which allows various inferences to be made and allows thought to be systematic (i.e., rational).

Despite more than a half-century of research on various logics (once constituting the entire field of non-monotonic logics), we still don't know even the broad outlines of such a logic. Even if we did know more about its form, it turns out that it would not apply to most thought processes. That is, most thought processes appear not to conform to cognitive science's canonical view of thought. Instead, much of thought appears to rest on parallel, associative principles: all those currently categorized as automatic, unconscious ones, including probably most of vision, memory, learning, problem-solving, and decision-making. Here, neural network research, theoretical neuroscience, and contemporary machine learning provide suggestive early steps regarding these processes, but remain rudimentary. The complex dynamics underlying non-propositional forms of thought remain an essential mystery.

We also know very little about how brain processes underlie thought. We do not understand the principles by which a single neuron integrates signals, nor even the 'code' it uses to encode information and to signal it to other neurons. We do not yet have the theoretical tools to understand how a billion of these cells interact to create complex thought. How such interactions create our inner mental life and give rise to the phenomenology of our experience (consciousness) remains, I think, as much of a fundamental mystery today as it was centuries ago.

Finally, there is a troubling epistemological problem: to know whether the Internet is changing how I think, my introspection into my own thinking would have to be reliable. Too many clever psychology and brain imaging experiments have made me suspicious of my own introspection. In place of the Cartesian notion that our mind is transparent to introspection, it is very likely that numerous biases undermine the possibility of self-knowledge, making our thinking as impermeable to ourselves as it is to others.

paul_saffo's picture

Technology Forecaster; Consulting Associate Professor, Stanford University

Back in the mid-1700s, Samuel Johnson observed that there were two kinds of knowledge: that which you know, and that which you know where to get. It was a moment when cheap and abundant print coupled with reliable postal networks triggered an information explosion that dramatically changed the way people thought. Johnson's insight was crucial because until then scholars relied heavily on the first kind of knowledge, the ability to know and recall scarce information. Abundant print usurped this task and in the process created the need for a new skill: Johnson's knowing "where to get it."

Print offloaded knowing from memory to paper and in the process triggered a revolution focused on making knowledge easier to get. Johnson's great Dictionary of the English Language, the first modern dictionary, was an exemplar of this effort, followed in the next century by innovations from Roget's thesaurus to catalogs, index cards, and file cabinets. As the store of paper-based knowledge grew, the new skill of research displaced the old skill of recall. A scholar could no longer get by on memory alone; one had to know where and how to get knowledge.

Now the Internet is changing how we think again. Just as print took over the once-human task of knowing, cyberspace is assuming the task of knowing where to get what we seek. A single click now accomplishes what once required days in a research library. A well-phrased search query is vastly more effective than resort to a card catalogue, and one no longer needs to master a thesaurus just to find a synonym. Knowing where to get is now the domain of machines, not humans.

Make something easy to do and skills once reserved to elites will become tools of the masses. Electronic calculators were not mere slide rule substitutes; they made computation convenient and accessible to everyone. The Internet is changing our thinking by giving the tremendous power of search to the most casual of users. We have democratized knowledge-finding in the same way 18th century publishing democratized knowledge access.

Computers have become intellectual bulldozers for the curious, but the result falls short of the utopian knowledge future hoped for at the dawn of the Internet. Back in Johnson's time the public reveled in their newfound access, buying up books, consuming newspapers and sending endless streams of letters to friends. It must have been exhilarating, but much of it was to utterly no purpose. Now we revel in search, but most of what we search for isn't worth seeking, as the top search lists on Google, Yahoo and Bing make clear. Couch potatoes who once channel-surfed their way through TV's vast wasteland have morphed into mouse potatoes Google-surfing the vaster wasteland of Cyberspace. They are wasting their time more interactively, but they are still wasting their time.

The Internet has changed our thinking, but if it is to be a change for the better, we must add a third kind of knowledge to Johnson's list: the knowledge of what matters. Two centuries ago the explosion of print demanded a new discipline of knowing where to find knowledge. When looking up was hard, one's searches inevitably tended towards seeking only what really mattered. Now that finding is easy, the temptation to chase down info-fluff is as seductive to us as purposeless books were to an 18th century Londoner happily wallowing in print. Without a discipline of knowing what matters, we will merely amuse ourselves to death.

Knowing what matters is more than mere relevance. It is the skill of asking questions that have purpose, that lead to larger understandings. Formalizing this skill seems as strange to us today as a dictionary must have seemed in 1780, but I'll bet it emerges just as surely as print abundance led to whole new disciplines devoted to organizing information for easy access. The need to determine what matters will inspire new modes of cyber-discrimination and perhaps even a formal science of determining what matters. Social media hold great promise as discrimination tools, and AI hints at the possibility of cyber-Cicerones who would gently keep us on track as we traverse the vastness of cyberspace in our enquiries. Perhaps the 21st century equivalent of the Great Dictionary will be assembled by a wise machine that knows what matters most.

raqs_media_collective's picture

Artists, Media Practitioners, Curators, Editors and Catalysts of Cultural Processes

We are a collective of three people who began thinking together, almost twenty years ago, before any one of us ever touched a computer, or had logged on to the Internet.

In those dark days of disconnect, in the early years of the final decade of the last century in Delhi, we plugged into each other's nervous systems by passing a book from one hand to another, by writing in each other's notebooks. Connectedness meant conversation. A great deal of conversation. We became each other's databases and servers, leaning on each other's memories, multiplying, amplifying and anchoring the things we could imagine by sharing our dreams, our speculations and our curiosities.

At the simplest level, the Internet expanded our already capacious, triangulated nervous system to touch the nerves and synapses of a changing and chaotic world. It transformed our collective capacity to forage for the nourishment of our imaginations and our curiosities. The libraries and archives that we had only dreamt of were now literally at our fingertips. The Internet brought with it the exhilaration and the abundance of a frontier-less commons along with the fractious and debilitating intensity of de-personalized disputes in electronic discussion lists. It demonstrated the possibilities of extraordinary feats of electronic generosity and altruism when people shared enormous quantities of information on peer-to-peer networks, and at the same time it provided early exposure to and warnings about the relentless narcissism of vanity blogging. It changed the ways in which the world became present to us and the ways in which we became present to the world, forever.

The Internet expands the horizon of every utterance or expressive act to a potentially planetary level. This makes it impossible to imagine a purely local context or public for anything that anyone creates today. It also de-centres the idea of the global from any privileged location. No place is any more or less the centre of the world than any other anymore. As people who once sensed that they inhabited the intellectual margins of the contemporary world simply because of the nature of geo-political arrangements, we know that nothing can be quite as debilitating as the constant production of proof of one's significance. The Internet has changed this one fact comprehensively. The significance, worth or import of one's statements is no longer automatically tied to the physical facts of one's location along a still unequal geo-political map.

While this does not mean that as artists, intellectuals or creative practitioners we stop considering or attending to our anchorage in specific co-ordinates of actual physical locations, what it does mean is that we understand that the concrete fact of our physical place in the world is striated by the location's transmitting and receiving capacities, which turns everything we choose to create into either a weak or a strong signal. We are aware that these signals go out, not just to those we know and to those who know us, but to the rest of the world, through possibly endless relays and loops.

This changes our understanding of the public for our work. We cannot view our public any longer as being arrayed along familiar and predictable lines. The public for our work, for any work that positions itself anywhere vis-a-vis the global digital commons is now a set of concentric and overlapping circles, arranged along the ripples produced by pebbles thrown into the fluid mass of the Internet. Artists have to think differently about their work in the time of the Internet because artistic work resonates differently, and at different amplitudes. More often than not, we are talking to strangers on intimate terms, even when we are not aware of the actual instances of communication.

This process also has its mirror. We are also listening to strangers all the time. Nothing that takes place anywhere in the world and is communicated on the Internet is at a remove any longer. Just as everyone on the Internet is a potential recipient and transmitter of our signals, we too are stations for the reception and relay of other people's messages. This constancy of connection to the nervous systems of billions of others comes with its own consequences.

No one can be immune to the storms that shake the world today. What happens down our streets becomes as present in our lives as what happens down our modems. This makes us present in vital and existential ways to what might be happening at great distance, but it also brings with it the possibility of a disconnect from what is happening around us, or near us, if those things and people happen not to be online.

This is especially true of things and people that drop out, or are forced to drop out of the network, or are in any way compelled not to be present online. This foreshortening (and occasionally magnification) of distances and compression of time compels us to think in a more nuanced way about attention. Attention is no longer a simple function of things that are available for the regard of our senses. With everything that comes to our attention we now have to ask what obstacles it had to cross to traverse the threshold of our considerations, and while asking this we have to understand that obstacles to attention are no longer a function of distance.

The Internet also alters our perception of duration. Sometimes, when working on an obstinately analog process such as the actual fabrication of an object, the internalized shadow of fleeting Internet time in our consciousness makes us perceive how the inevitable delays inherent in the fashioning of things (in all their messy 'thingness') ground us into appreciating the rhythms of the real world. In this way, the Internet's pervasive co-presence with real world processes ends up reminding us of the fact that our experience of duration is now a layered thing. We now have more than one clock, running in more than one direction, at more than one speed.

The simultaneous availability of different registers of time made manifest by the Internet also creates a continuous archive of our online presences and inscriptions. A message is archived as soon as it is sent. The everyday generation of an internal archive of our work, and the public archive of our utterances (on online discussion lists and on Facebook), means that nothing (not even a throwaway observation) is a throwaway observation anymore. We are all accountable to, and for, the things we have written in emails or posted on online fora. We are yet to get a full sense of what this actually implies in the longer term. The automatic generation of a chronicle and a history colours the destiny of all statements. Nothing can be consigned to amnesia, even though it may appear to be insignificant. Conversely, no matter how important a statement may have appeared when it was first uttered, its significance is compromised by the fact that it is ultimately filed away as just another datum, a pebble, in a growing mountain range.

Whosoever maintains an archive of their practice online is aware of the fact that they alter the terms of their visibility. Earlier, one assumed invisibility to be the default mode of life and practice. Today, visibility is the default mode, and one has to make a special effort to withhold any aspect of one's practice from visibility. This changes the way we think about the relationship between the private memory and public presence of a practice. It is not a matter of whether this leads to a loss of privacy or an erosion of spaces for intimacy, it is just that issues such as privacy, intimacy, publicity, inclusion and seclusion are now inflected very differently.

Finally, the Internet changes the way we think about information. The fact that we do not know something that exists in the extant expansive commons of human knowledge can no longer intimidate us into reticence. If we do not know something, someone else does, and there are enough ways around the commons of the Internet that enable us to get to sources of the known. The unknown is no longer that which is unavailable, because whatever is present is available on the network and so can be known, at least nominally if not substantively. A bearer of knowledge is no longer armed with secret weapons. We have always been auto-didacts, and knowing that we can touch what we do not yet know and make it our own, makes working with knowledge immensely playful and pleasurable. Sometimes, a surprise is only a click away.

scott_sampson's picture

President & CEO, Science World British Columbia; Dinosaur paleontologist and science communicator; Author, How To Raise A Wild Child

Like many others, my personal experience is that the Internet is both the Great Source for information and the Great Distractor, fostering compulsions to stay "connected," often at the expense of other, arguably more valuable aspects of life. I do not sense that the Internet alters the way that I think as much as it does the way I work; having the Great Source close at hand is simply irresistible, and I generally keep a window open on my laptop for random searches that pop into my head.

Nevertheless, I am much less concerned about "tweeners" like me who grew up before the Internet than I am about children of the Internet age, so-called "Digital Natives." I want to know how the Internet changes the way they think. As will no doubt be confirmed by answers to the Edge Annual Question, the jury is still out. Although the supporting research may still be years away, it seems likely that a lifetime of daily conditioning dictated by the rapid flow of information across glowing screens will generate substantial changes in brains, and thus thinking. Commonly cited potential effects include fragmented thinking and shorter attention spans, together with a concomitant reduction in reflection, introspection, and in-depth thought (let alone interest in them). Another oft-noted concern is the nature of our communications, which are becoming increasingly terse and decreasingly face-to-face.

But I have a larger fear, one rarely mentioned in these discussions: the extinction of experience. This term, which comes from author Robert Michael Pyle, refers to the loss of intimate experience with the natural world. Clearly, anyone who spends 10-plus hours each day with their attention focused on a screen is not devoting much time to experiencing the "real" world. More and more, it seems, real-life experience is being replaced by virtual alternatives. And, to my mind at least, this is a grave problem. Let me explain.

As the first generation to contemplate the fact that humanity may have a severely truncated future, we live at arguably the most pivotal moment in the substantial history of Homo sapiens. Decisions made and actions taken during the next generation will have a disproportionate impact on the future of humans and all other life on Earth. If we blunder onward on our present course, increasing populations, poverty, greenhouse gas emissions, and habitat destruction, we face no less than the collapse of civilization and the decimation of the biosphere. Given the present dire circumstances, any new far-reaching cultural phenomenon must be evaluated in terms of its ability to help or hinder the pressing work to be done; certainly this concern applies to how the Internet influences thinking.

Ecological sustainability, if it is to occur, will include greener technologies and lifestyles. In addition, however, we require a shift in worldview that re-configures our relationship with non-human nature. To give one prominent example of our current dysfunctional perspective, how are we to achieve sustainability as long as we see nature as part of the economy rather than the inverse? Instead of a collection of resources available for our exploitation, nature must become a community of relatives worthy of our respect and a teacher to whom we look for inspiration and insight. In contrast to the present day, sustainable societies will likely be founded on local foods, local materials, and local energy. They will be run by people who have a strong passion for place and a deep understanding of the needs of those places. And I see no way around the fact that this passion and understanding will be grounded in direct, firsthand experiences with those places.

My concern, then, is this: How are we to develop new, more meaningful connections to our native communities if we are staring at computer screens that connect us only to an amorphous worldwide "community?" As is evident to anyone who has stood in a forest or on a seashore, there is a stark difference between a photograph or video and the real thing. Yes, I understand the great potential for the Internet to facilitate fact-finding, information sharing, and even community-building of like-minded people. I am also struck by the radical democratization of information that the Internet may soon embody. But how are we to establish affective bonds locally if our lives are consumed by virtual experiences on global intermedia? What we require is uninterrupted solitude outdoors, sufficient time for the local sights, sounds, scents, tastes, and textures to seep into our consciousness. What we are seeing is children spending less and less time outdoors actually experiencing the real world and more and more time indoors immersed in virtual worlds.

In effect, my argument is that the Internet may influence thinking indirectly through its unrelenting stranglehold on our attention and the resultant death (or at least denudation) of non-virtual experience. If we are to care about larger issues surrounding sustainability, we first must care about our local places, which in turn necessitates direct experiences in those places. As Pyle observes, "what is the extinction of the condor to a child who has never known a wren?"

One thing is certain. We have little time to get our act together. Nature, as they say, bats last. Ultimately, I can envision the Internet as a Net positive or a Net negative force in the critical sustainability effort, but I see no way around the fact that any positive outcome will involve us turning off the screens and spending significant time outside interacting with the real world, in particular the nonhuman world.

neri_oxman's picture

Architect, Researcher, MIT; Founder, Materialecology

'I, myself, alone, have more memories than all mankind since the world began,' he said to me. And also: 'My dreams are like other people's waking hours.' And again, toward dawn: 'My memory, sir, is like a garbage heap.' (Jorge Luis Borges, Funes el memorioso)

Funes, His Memory tells the evocative tale of Ireneo Funes, a Uruguayan boy who suffers an accident which leaves him hopelessly immobilized along with an acute form of Hypermnesia, a mental abnormality expressed in exceptionally precise memory. So vivid is Funes' memory that he can effortlessly distinguish any physical object at every distinct time of viewing. In his perpetual present, images unfold their archaeology as infinite wells of detailed information: "He knew the forms of the clouds in the southern sky on the morning of April 30th, 1882". Funes' memories are intensely present, with muscular and thermal sensations accompanying every visual record ever recorded. He is able to reconstruct every event he has ever experienced. His recollections are so accurate that the time it takes to reconstruct an entire day's worth of events equals the duration of that very day. In Funes' world, perception makes no sense at all, as there is simply no time or motive to perceive, reflect, or interpret.

As a consequence, Funes lacks the ability for detail suppression, and any attempt to conceive of, or manage, his impressions (the very stuff of thought) is overridden with relentlessly literal recollections ("We, in a glance, perceive three wine glasses on the table; Funes saw all the shoots, clusters, and grapes of the vine."). Funes is not able to generalize, to deduce or to induce anything he experiences. Things are just what they are, scaled one to one. Cursed with meticulous memory, Funes escapes to live in remoteness and isolation, a "dark room" where new images do not enter and where his motionless figure is absorbed in the contemplation of a sprig of Artemisia.

Over a century later, Hypermnesia appears to have been to Funes what the World Wide Web is today to the human race.

An inexhaustible anthology of every possible thing recorded at every conceivable location in any given time, the Internet is displacing the role of memory and it does so immaculately. Any imaginable detail about the many dimensions of any given experience is being either recorded or consumed as yet another fragment of reality. There is no time to think, it seems. Or perhaps, this is just a new kind of thinking. Is the Web yet another model of reality, or is reality becoming a model of the Web?

In his "On Exactitude in Science", Borges carries on with similar ideas concerningtrace as he describes an empire in which the craft of cartography attained such precision that its map has emerged as large as the kingdom it depicts. Scale, or difference, was now replaced by repetition. A model within itself, such a map embodies the dissimilarity between reality and its representation. It becomes the territory itself and the origin loses authenticity; it achieves the state of being more real than real as there is no reality left to chart.

The Internet, no doubt, has become such a map of the world, both literally and symbolically, as it traces in an almost 1:1 ratio every event that has ever taken place. One cannot afford to get lost in a space so perfectly detailed and predictable. Physical navigation is completely solved as online maps offer even the most exuberant flâneur the knowledge of prediction. But there are also enormous mental implications to this.

As we are fed the information required or desired to understand and perceive the world around us, the very power of perception withers, and the ability to engage in abstract and critical thought atrophies. Models become the very reality that we are asked to model.

If one believes that the wetware source of intellectual production, whether in the arts or sciences, is guided by the ability to critically model reality, to scale information and to engage in abstract thought, where are we heading in the age of the Internet? Are we being victimized by our own inventions? The Internet may well be considered an oracle, the builder of composite and hybrid knowledge, but as it stands today, is its present instantiation actually inhibiting the very cognitive nature of reflective and creative thought?

Funes is portrayed as an autistic savant, with the gift of memorizing anything and everything. This gift eventually drives him mad but Borges is said to have constructed Funes' image to suggest the "waste of miracle" and point at the vast and dormant potential we still encompass as humans. In letting the Internet think for us, as it were, are we encouraging the degeneration of our own mental capacities? Is the Internet making us obliviously somnolent?

Between the associative nature of memory and the referential eminence of the map lies a blueprint for the brain. In the ambience of future ubiquitous technologies looms the promise of an ecstasy of connectivity (or thus is the vision of new consciousness à la Gibson and Sterling). If such a view of augmented interactivity is even remotely accurate (as it must be), it is the absence of a cognate presence that defies the achievement of transforming the Internet into a new reality, a universally accessible medium for enhanced thinking. If the Internet can potentially become an alternative medium of human consciousness, how then can a cognate presence inspire the properties of infinite memory with the experiential and the reflective, all packaged for convenience and pleasure in a Mickey Mouse-like antenna cap?

In Borges' tale, Funes cites a revealing line from Pliny's Naturalis Historia. In its section on memory, it reads:

"ut nihil non iisdem verbis redderetur auditum"

So that nothing once heard could fail to be retold in the very same words.

lisa_randall's picture

Physicist, Harvard University; Author, Dark Matter and the Dinosaurs

The plural of anecdotes is not data, but anecdotes are all I have. We don't yet understand how we think or what it means to change the way we think. Scientists are making inroads and ultimately hope to understand much more. But right now all I and my fellow contributors can do is make observations and generalize.

We don't even know if the Internet changes the way we read. It certainly changes how we read, as it changes how we do many aspects of our work. Maybe it ultimately changes how our brains process written information, but we don't yet know. Still, the question of how the Internet changes how we think is an enormous problem, one that anecdotes might help us understand. So I'll tell a couple (if I can focus long enough to do so).

Someone pointed out to me once that he, like me, never uses a bookmark in a book. I always attributed my negligence to disorganization and laziness; the few times I attempted to use a bookmark I promptly misplaced it. But what I realized after this was pointed out is that not using bookmarks was my choice. It doesn't make sense to find a place in a book that you technically have read but that is so far from your memory that you don't remember having read it. By not using a bookmark, I was guaranteed to return to the last continuous section of text that actually made a dent in my brain.

With the Internet we tend to absorb multiple pieces of information about whatever topic we decide we're interested in. Online, we search. In fact Marvin Minsky recently told me that he prefers reading on an electronic device in general because he values the search function. And I certainly often do too. In fact I tend to remember the answers to the pointed questions I ask on the Internet better than I do what I read in a long book. But there is also the danger that something valuable about reading in a linear fashion, absorbing information internally, and processing it as we go along is lost with the Internet or even electronic devices, where it is too easy to cheat by searching.

One aspect of reading a newspaper that I've already lost a lot of is the randomness that comes with reading in print rather than online. Today I read the articles that I know will interest me when I'm staring at a computer screen and have to click to get to the actual article. When I read print papers, something I do less and less, my eyes are sometimes drawn to an interesting piece, or even an advertisement, that I would never have chosen to look for. Despite its breadth, and the fact that I can be so readily distracted, I still use the Internet in a targeted fashion.

So why don't I stick to print media? The Internet is great for disorganized people like me who don't want to throw something away for fear of losing something valuable they missed. I love knowing everything is still online and that I can find it. I hate newspapers piling up. I love not having to be in an office to check books. I can make progress at home, on a train, or on a plane (when there is enough room between rows to open my computer). Of course as a theoretical physicist I could do that before as well; it just meant carrying a lot more weight.

And I do often take advantage of the Internet's breadth, even if it is a little more directed. A friend might send me to a Web site. Or I might just need or want to learn about some new topic. The Internet also allows me to be bolder. I can quickly get up to speed on a topic I previously knew nothing about. I can check facts and I can learn others' points of view on any subject I decide is interesting. I can write about subjects I wouldn't have dared to touch before, since I can quickly find out the context in a way that was previously much more difficult to access.

Which brings me back to the quote "the plural of anecdotes is not data." I thought I should check who deserves the attribution. It's not entirely clear, but it might go back to a pharmacologist named Frank Kotsonis, who was writing about the effects of aspartame. I find this particularly funny because I stopped consuming aspartame due to my personal anecdotal evidence that it made me focus less well. But I digress.

Here's the truly funny aspect of the quote I discovered with my Google search. The original quote, from the Berkeley political scientist Raymond Wolfinger, was exactly the opposite: "The plural of anecdotes is data." I'm guessing this depends on what kind of science you do.

The fact is that the Internet provides a wealth of information. It doesn't yet organize it all or process it or arrange for scientific conclusions. The Internet allows us (as a group) to believe both facts and their opposites; we'll all find supporting evidence or opinions.

But we can attend talks without being physically present and work with people we've never met in person. We have access to all physics papers as they are churned out, but we still have to figure out which are interesting and process what they say.

I don't know how differently we think. But we certainly work differently and do so at a different pace. We can learn many anecdotes that aren't yet data.

Though all those distracting emails and Web sites can make it hard to focus!

larry_sanger's picture

Co-founder of Wikipedia and Citizendium

The instant availability of an ocean of information has been an epoch-making boon to humanity. But has the resulting information overload also deeply changed how we think? Has it changed the nature of the self? Has it even, as some have suggested, radically altered the relationship of the individual and society? These are important philosophical questions, but vague and slippery, and I hope to clarify them.

The Internet is changing how we think, it is suggested. But how is it, precisely? One central feature of the "new mind" is that it is spread too thin. But what does that mean?

In functional terms, being spread too thin means we have too many Websites to visit, we get too many messages, and too much is "happening" online and in other media that we feel compelled to take on board. Many of us lack effective strategies for organizing our time in the face of this onslaught. This makes us constantly distracted and unfocused, and less able to perform heavy intellectual tasks. Among other things, or so some have confessed, we cannot focus long enough to read whole books. We feel unmoored, and we flow along helplessly wherever the fast-moving digital flood carries us.

We do? Well, some of us do, evidently.

Some observers speak of "where we are going," or of how "our minds" are being changed by information overload, apparently despite ourselves. Their discussions make erstwhile free agents mere subjects of powerful new forces, and the only question is where those forces are taking us. I don't share the assumption here. When I read the title of Nick Carr's essay, "Is Google Making Us Stupid?" I immediately thought, "Speak for yourself." It seems to me that in discussions like Carr's, it is assumed that intellectual control has already been ceded; but that strikes me as being a cause, not a symptom, of the problem Carr bemoans. After all, the exercise of freedom requires focus and attention, and the ur-event of the will is precisely focus itself. Carr unwittingly confessed for too many of us a moral failing, a vice; the old name for it is intemperance (in the older, broader sense, contrasted with sophrosyne, moderation or self-control). And, as with so much of vice, we want to blame it on anything but ourselves.

Is it really true that we no longer have any choice but to be intemperate in how we spend our time, in the face of the temptations and shrill demands of networked digital media? New media are not that powerful. We still retain free will, which is the ability to focus, deliberate, and act on the results of our own deliberations. If we want to spend hours reading books, we still possess that freedom. Only philosophical argument could establish that information overload has deprived us of our agency. The claim at root is philosophical, not empirical.

My interlocutors might cleverly reply that we now, in the age of Facebook and Wikipedia, do still deliberate, but collectively. In other words, for example, we vote stuff up or down on Digg, del.icio.us, and Slashdot, and then we might feel ourselves obligated (if we're participating as true believers) to pay special attention to the top-voted items. Similarly, we attempt to reach "consensus" on Wikipedia and (again, if participating as true believers) endorse the end result as credible. To the extent that our time is thus directed by social networks engaged in collective deliberation, we are subjugated to a "collective will," something like Rousseau's notion of a general will. To the extent that we plug in, we become merely another part of the network. That, anyway, is how I would reconstruct the collectivist-determinist position that is opposed to my own individualist-libertarian one.

But we obviously have the freedom not to participate in such networks. And we have the freedom to consume the output of such networks selectively, holding our noses; to participate, we needn't be true believers. So it is very hard for me to take the "woe is us, we're growing stupid and collectivized like sheep" narrative seriously. If you feel yourself growing ovine, bleat for yourself.

I get the sense that many writers on these issues aren't much bothered by the un-focusing, de-liberating effects of joining the Hive Mind. Don Tapscott has suggested that the instant availability of information means we don't have to "memorize" anything anymore; just consult Google and Wikipedia, the brains of the Hive Mind. Clay Shirky seems to believe that in the future we will be enculturated not by reading dusty old books but in something like online fora, plugged into the ephemera of a group mind, as it were. But surely, if we were to act as either of these college teachers recommend, we'd become a bunch of ignoramuses. Indeed, perhaps that's what social networks are turning too many kids into, as Mark Bauerlein argues cogently in The Dumbest Generation. (For the record, I've started homeschooling my own little boy.)

The issues here are much older than the Internet. They echo the debate between progressivism and traditionalism found in philosophy of education: should children be educated primarily so as to fit in well in society, or should the focus be on training minds for critical thinking and filling them with knowledge? For many decades before the advent of the Internet, educational progressivists have insisted that, in our rapidly changing world, knowing mere facts is not what is important, because knowledge quickly becomes outdated; rather, being able to collaborate and solve problems together is what is important. Social networks have reinforced this ideology by seeming to make knowledge and judgment collective functions. But the progressivist case against learning facts and training individual judgment withers under scrutiny, and, pace Tapscott and Shirky, the events of the last decade have not made it more durable.

In sum, there are two basic issues here. Do we have any choice about ceding control of the self to an increasingly compelling "Hive Mind"? Yes. And should we cede such control, or instead strive, temperately, to develop our own minds very well and direct our own attention carefully? The answer, I think, is obvious.

mark_pagel's picture

Professor of Evolutionary Biology, Reading University, UK; Fellow, Royal Society; Author, Wired for Culture

The Internet isn't changing the way I or anybody else thinks. We know this because we can still visit some people on Earth who don't have the Internet, and they think the same way that we do. My general-purpose thinking circuits are hard-wired into my brain from genetic instructions honed over millions of years of natural selection. True, the brain is plastic; it responds to the way it is brought up by its user, or to the language it has been taught to speak. But its fundamental structure is not changed this way, except perhaps in extremis (maybe eight hours per day of computer games).

But the Internet does take advantage of our appetites, and this changes our thoughts, if not the way we think. Our brains have appetites for thinking, learning, feeling, hearing, and seeing. They like to be used. It is why we do crossword puzzles and brain-teasers, read books and visit art galleries, watch films, and play or listen to music. Our brain appetites act as spurs to action, in much the same way that our emotions do, or that our other appetites (for food and sex) do. Those of us throughout history who acted on our world, even if just to wonder why fires start, why the wind blows out of the southwest, or what would happen if we combined heat with clay, will have been more successful than those of us who sat around waiting for things to happen.

So, the Internet is brain candy to me and, I suspect, to most of us; it slakes our appetite to keep our brain occupied. That moment when a search engine pops up its 1,278,000 results to my query is a moment of pure injection of glucose into my brain. It loves it. It is why so many of us keep going back for more. Some think that this is why the Internet is going to make us lazy, less literate, and less numerate, that we will forget what lovely things books are, and so on. But even as brain candy I think the Internet's influence on these sorts of capabilities and pleasures is probably not as serious as the curmudgeons and troglodytes would have you believe. They will be the same people who grumbled about the telegraph, trains, the motorcar, the wireless, and television.

There are far more interesting ways that the Internet changes our thoughts, and especially the conclusions we draw, and it does this also by acting on our appetites. I speak of contagion, false beliefs, neuroses (especially medical and psychological), conspiracy theories, and narcissism. The technical point is this: the Internet tricks us into doing bad mathematics; it gets us to do a mathematical integration inside our brains that we don't know how to do. What? In mathematics, integration is a way of summing an infinite number of things. It is used to calculate quantities like volumes, areas, rates, and averages. Our brains evolved to judge risks, to assess likelihoods or probabilities, to defend our minds against undue worry, and to infer what others are thinking, by sampling and summing or averaging across small groups of people, most probably the people in our tribe. They do this automatically, and normally without our even knowing about it.

In the past my assessment of the risk of being blown up by a terrorist, or of getting swine flu, or of my child being snatched by a pedophile on the way to school, was calculated from the steady input of information I would have received mainly from my small local group, because these were the people I spoke to or heard from and these were the people whose actions affected me.

What the Internet does, and what mass communication does more generally, is sample those inputs from the 6.8 billion people on Earth. But my brain still assumes the inputs arose from my local community, because that is the case its assessment circuits were built for. That is what I mean by bad mathematics. My brain assumes a small denominator (the bottom number in a fraction), with the result that its answer to the question of how likely something is to happen comes out too big.

So, when I hear every day of children being snatched, my brain gives me the wrong answer to the question of risk: it has divided a big number (the children snatched all over the world) by a small number (the tribe). Call this the 'Madeleine McCann' effect. We all heard months of coverage of this sad case of kidnapping (still unresolved), and, although it is trivial compared to what the McCanns suffered, it has caused undue worry in the rest of us.
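
To make the arithmetic concrete, here is a toy Python calculation; every number in it is invented for illustration, not drawn from any real dataset:

    # Pagel's "wrong denominator" effect, with invented numbers.
    incidents_heard_about = 20        # kidnappings I hear reported in a year
    local_group = 150                 # the small community our risk circuits evolved to sample
    world_population = 6_800_000_000  # the pool that mass media actually samples

    felt_risk = incidents_heard_about / local_group          # what the brain implicitly computes
    actual_risk = incidents_heard_about / world_population   # the honest fraction

    print(f"felt risk:    {felt_risk:.4f}")                  # ~0.13: terrifying
    print(f"actual risk:  {actual_risk:.2e}")                # ~2.9e-09: negligible
    print(f"overestimate: {felt_risk / actual_risk:,.0f}x")  # about 45 million times too big

On these made-up numbers the brain's answer comes out tens of millions of times too large, which is exactly the mis-division described above.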

The effects of the bad mathematics don't stop with judging risks. Doing the integration wrong means that contagion can leap across the Internet. Contagion is a form of risk assessment with an acutely worrying conclusion. Once it starts on the Internet, everyone's bad mathematics makes it explode. So do conspiracy theories: if it seems everyone is talking about something, it must be true! But this is just the wrong denominator again. Neuroses and false beliefs are buttressed: we all worry about our health, and in the past we would look around us and find that no one else was worrying or ill. But consult the Internet and 1,278,000 people (at least!) are worrying, and they've even developed Websites to talk about their worry. The 2009 swine flu pandemic has been a damp squib, but you wouldn't have known that from the frenzy.

The bad mathematics can also give us a sense that we have something useful to say. We'd all like to be taken seriously, and evolution has probably equipped us to think we are more effective than we really are; it seeds us with just that little bit of narcissism. A false belief, perhaps, but better for evolution to err on the side of getting us to believe in ourselves than not to. So, we go onto the Internet and make Websites, create Facebook pages, contribute to YouTube, and write Web logs, and, surprise, it appears that everyone is looking at or reading them, because look at how many people are leaving comments! Another case of the wrong denominator.

The maddening side of all this is that neither I nor most others can convince ourselves to ignore these worries, neuroses, narcissistic beliefs, and poor assessments of risk (to ignore our wrong thoughts) precisely because the Internet has not changed the way we think.

martin_rees's picture

Former President, The Royal Society; Emeritus Professor of Cosmology & Astrophysics, University of Cambridge; Fellow, Trinity College; Author, From Here to Infinity

In 2002, three Indian mathematicians (Manindra Agrawal and his two students Neeraj Kayal and Nitin Saxena) devised a deterministic polynomial-time algorithm for testing whether a number is prime, a question at the heart of the number theory underpinning modern cryptography. They posted their results on the Web. Such was the interest that within just a day, 20,000 people had downloaded the work, which became the topic of hastily convened discussions in many centres of mathematical research around the world.
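
Their result, now known as the AKS primality test, showed that the question "is n prime?" can be settled deterministically in polynomial time. The AKS procedure itself is intricate, so as a rough sketch of the territory here is the far simpler randomized Miller-Rabin test that preceded it (and that practical cryptography still relies on), in Python; this illustrates the problem, not the AKS algorithm itself:

    import random

    def is_probable_prime(n, rounds=40):
        """Miller-Rabin randomized primality test (not AKS, which is deterministic)."""
        if n < 2:
            return False
        for p in (2, 3, 5, 7, 11, 13):   # quick check against small primes
            if n % p == 0:
                return n == p
        d, s = n - 1, 0                  # write n - 1 as d * 2^s with d odd
        while d % 2 == 0:
            d //= 2
            s += 1
        for _ in range(rounds):
            a = random.randrange(2, n - 1)
            x = pow(a, d, n)             # a^d mod n
            if x in (1, n - 1):
                continue
            for _ in range(s - 1):
                x = pow(x, 2, n)
                if x == n - 1:
                    break
            else:
                return False             # this 'a' witnesses that n is composite
        return True                      # very probably prime

    print(is_probable_prime(2**127 - 1))  # True: a known Mersenne prime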

This episode, which brought instant global recognition to two young Indian students, offers a stark contrast with the struggles of a young Indian genius a hundred years ago. Srinivasa Ramanujan, a clerk in Madras, mailed long screeds of mathematical formulae to G H Hardy, a professor at Trinity College, Cambridge. Fortunately, Hardy had the percipience to recognise that Ramanujan was not the typical green-ink scribbler who finds numerical patterns in the bible or the pyramids, but that his writings betrayed preternatural insight. Hardy arranged for Ramanujan to come to Cambridge, and did all he could to foster his genius; sadly, however, culture shock and poor health led Ramanujan to an early death.

The Internet enables far wider participation in front-line science; it levels the playing field between researchers in major centres and those in relative isolation, hitherto handicapped by inefficient communication. It has transformed the way science is communicated and debated. More fundamentally, it changes how research is done, what might be discovered, and how students learn.

And it allows new styles of research. For example, in the old days, astronomical information, even if in principle publicly available, was stored on delicate photographic plates: these were not easily accessible, and were tiresome to analyse. Now, such data (and, likewise, large datasets in genetics or particle physics) can be accessed and downloaded anywhere. Experiments, and natural events such as tropical storms or the impact of a comet on Jupiter, can be followed in real time by anyone who is interested. And the power of huge computing networks can be deployed on large data sets.

Indeed, scientific discoveries will increasingly be made by 'brute force' rather than by insight. IBM's 'Deep Blue' beat Kasparov not by thinking like him, but by exploiting its speed to explore a huge variety of options. There are some high-priority scientific quests â€" for instance, the recipe for a room-temperature superconductor, or the identification of key steps in the origin of life â€" which may yield most readily neither to insight nor to experiment, but to exhaustive computational searches.

Paul Ginsparg's arXiv.org archive transformed the literature of physics, establishing a new model for communication over the whole of science. Far fewer people today read traditional journals. These have so far survived as guarantors of quality. But even this role may soon be trumped by a more informal system of quality control, signalled by the approbation of discerning readers (by analogy with the grading of restaurants by gastronomic critics), by blogs, or by Amazon-style reviews.

Clustering of experts in actual institutions will continue, for the same reason that high-tech expertise congregates in Silicon Valley and elsewhere. But the actual progress of science will be driven by ever more immersive technology where propinquity is irrelevant. Traditional universities will survive insofar as they offer mentoring and personal contact to their students. But it's less clear that there will be a future for the 'mass university' where the students are offered little more than a passive role in lectures (generally of mediocre quality) with minimal feedback. Instead, the Internet will offer access to outstanding lectures, and in return will offer the star lecturers (and perhaps the best classroom teachers too) a potentially global reach.

And it's not just students, but those at the end of their careers, whose lives the Internet can transformatively enhance. We oldies, as we become less mobile, will be able to immerse ourselves (right up until the final switch-off, or until we lose our wits completely) in an ever more sophisticated cyber-world allowing virtual travel and continuing engagement with the world.

robert_sapolsky's picture

Neuroscientist, Stanford University; Author, Behave

I should start by saying that I'm not really one to ask about such things, as I am an extremely unsophisticated user of the Internet. I've never sold anything on eBay, bought anything from Amazon, or posted something on YouTube. I don't have an avatar on Second Life, and I've never "met" anyone online. And I've never successfully defrauded the wealthy widow of a Nigerian dictator. So I'm not much of an expert on this.

However, like most everyone else, I've wasted huge amounts of time wandering around the Internet. As part of my profession, I think a lot about the behavior of primates, including humans, and the behavior manifest on the Internet has subtly changed my thinking. Much has been made of the emergent properties of the Internet. The archetypal example, of course, is Wikipedia.

A few years back, Nature commissioned a study showing that, when it came to accuracy about hard-core science facts, Wikipedia was within hailing distance of the Encyclopedia Britannica. Immensely cool: within just a few years, a self-correcting, bottom-up system of quality that's fundamentally independent of authorities from on high is breathing down the neck of the mother of all sources of knowledge. The proverbial wisdom of crowds. It strikes me that there may be a very interesting consequence of this. When you have generations growing up with bottom-up emergence as routine, when wisdom-of-the-crowd phenomena tell you more accurately what movies you'll like than some professional movie critic can, people are more likely to realize that life can have emerged with all its adaptive complexity without some omnipotent being with a game plan.
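
The statistical heart of that wisdom-of-crowds effect is simply that independent errors tend to cancel when pooled. A toy Python simulation, with an invented quantity and invented noise, shows the flavor:

    import random

    # Toy "wisdom of crowds": many noisy, independent guesses, averaged.
    random.seed(42)
    true_value = 544.0                       # the invented quantity the crowd is guessing
    guesses = [random.gauss(true_value, 50) for _ in range(1000)]

    crowd_estimate = sum(guesses) / len(guesses)
    print(abs(crowd_estimate - true_value))  # small: the errors largely cancel
    print(abs(guesses[0] - true_value))      # a lone guess is usually far worse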

As another plus, the Internet has made me think that the downtrodden have a slightly better chance of being heard: the efficacy of the crowd. A small example of that: recent elections in which candidates have run Internet campaigns. Far more consequential, of course, is the ability of the people to vote online about who should win American Idol. But what I'm beginning to think is possible is that someday an abused populace will rise up and, doing nothing more than sitting at their computers and hacking away, freeze a government and bring down a dictator. Forget a Velvet Revolution. An Online Revolution.

Mind you, amid that optimism, it's hard not to despair a bit at the idiocy of the crowd, as insane rumors careen about the Internet.

However, the thing that has most changed my thinking is the array of oddities online. By this, I don't mean the fact that 147 million people have watched Charlie Bit Me, with another 20 million watching the various remixes. That's small change. I mean the truly strange Websites. Like the ones for people with apotemnophilia, a psychiatric condition in which the person wishes to lose a limb.

There's someone who sold, for $263, a piece of gum online that Britney Spears had spit out. A Website for people who like to chew on ice cubes. Websites (yes, plural) for people who are aroused by pictures of large stuffed animals "having sex." And one for people who have been cured of that particular taste by Jesus. An online museum of barf bags from airlines around the world. A Website store for people who like to buy garden gnomes and stab them in the head with sharp things. And then post pictures of it. On and on. The weirdness of (subsets of) the crowd.

As a result of wasting my time over the years surfing the Internet, I've come to better understand how people have a terrible craving to find others like themselves, and the more unconventional the person, the more the need. I've come to realize that there can be wildly unforeseen consequences in a material world crammed with the likes of barf bags and garden gnomes. And most of all, the existence of these worlds has led me to appreciate more deeply the staggering variety and richness of the internal lives of humans. So maybe not such a waste of time.

gregory_paul's picture

Independent Researcher; Author, The Princeton Field Guide of Dinosaurs

As someone who has predicted that humans will be uploading their minds into cybermachines in the not-too-distant future, I might be assumed to be enthusiastic about the Internet. But my still-primate mind's thinking about the new mode of information exchange is more ambivalent.

No doubt the Internet is changing the way I operate and influence the world around me. Type "gregory paul religion and society" into Google and nearly four million hits come up. I'm not entirely sure what that means, but it looks impressive. An article in a British newspaper on my sociological research garnered over 700 comments. Back in the 20th century I could not imagine my technical research making such an impression on the global sociopolitical scene, because the responsible mechanism (publishing in open-access online academic journals) was not available. The new communication environment is undoubtedly altering my research and publicity strategy relative to what it would be in a less digital world. Even so, I am not entirely sure how my actions are being modified. The only way to find out would be to run a parallel-universe experiment in which everything is the same except for the existence of an Internet type of communications, and see what I do in the alternative situation.

What is disturbing to this human raised on hard-copy information transmission is how fast the Internet is destroying a large portion of the former. My city no longer has a truly major newspaper, and the edgy, free City Paper is a pale shadow of its former self, in danger of extinction. I have enjoyed living a few blocks from a major university library because I could casually browse through the extensive journal stacks, leafing through assorted periodicals to see what was up in the latest issues. Because the search was semi-random, it was often pleasantly and usefully serendipitous. Now that the Hopkins library has severely cut back on paper journals as the switch to online continues, it is less fun. It's good to save trees, and looking up a particular article is often easier online, but checking the contents of the latest issue of Geology on the library computer is neither as pleasant nor as convenient. I suspect that the range of my information intake has narrowed, and that can't be good.

On the positive side, it could be amazingly hard to get basic info before the Web showed up. In my teens I was intrigued by the notorious destruction of the HMS Hood in 1941, but I was not able to get a clear impression of the famed vessel's appearance for a couple of years, until I saw a friend's model, and I did not see a clear image until well after that. Such extreme data deprivation is thankfully over, due to Wikipedia, etc. But even the Internet cannot fill all information gaps. It often remains difficult to search out obscure details of the sort found only in books that can look at subjects in depth. Websites often reference books, but if the Internet limits the production of manuscript-length works, then the quality of information is going to suffer.

As for the specific question of how the Internet is changing my thinking, online apps facilitate the statistical analyses that are expanding my sociological interests and conclusions further than I ever thought they would go, leading to unanticipated answers to some fundamental questions about popular religion that I am delighted to uncover. Beyond that there are more subtle effects, but exactly what they are I am not sure, sans the parallel-world experiment. I also fear that the brevity favored by on-screen rather than page-turning reading is shortening my attention span. It is as if one of Dawkins's memes were altering my unwilling mind, like a bad science fiction story. But that's a non-quantitative, anecdotal impression; perhaps I just think my thinking has changed. It is possible the new arrangement is not altering my mental exertions further than it is because the old-fashioned mind generated by my brain remains geared to the former system.

The new generation growing up immersed in the digital complex may be developing thinking processes more suited to the new paradigm, for better or for worse. But as far as I know that's a hypothesis rather than a documented fact. Perhaps human thinking is not as amenable to being modified by external factors as one might expect. And the Internet may be more retro than it first seems. The mass media of the 20th century were truly novel because analog-based technology turned folks from home entertainers and creators (gathering around the piano and singing and inventing songs and the like) into passive consumers of a few major outlets (sitting around the telly and fighting over the remote). People are using hyperfast digital technology to return to self-creativity and entertainment. How all this is affecting young psyches is a matter for sociobehavioral and neuropsychological research to sort out.

But how humans old and young are affected may not matter all that much. In the immediacy of this early-21st-century moment, the Internet revolution may look more radical than it actually is; it could merely be the prelude to the real revolution. The human domination of digital communications will be a historically transitory event if and when high-level thinking cyberminds start utilizing the system. The ability of superintelligences to share and mull over information will dwarf what mere humans can manage. Exactly how will the interconnected uberminds think?

Hell if I know.

ed_regis's picture

Science writer; Author, Monsters

The Internet is not changing the way I think (nor, so far as I am concerned, the way anyone else thinks, either, but that is not the Edge question). To state the matter somewhat naively, I continue to think the same way I always thought: by using my brain, my five (or six) senses, and by considering the relevant available information. I mean, how else can you think?

What it has changed for me is my use of time. The Internet is simultaneously the world's greatest time-saver and the greatest time-waster in history. As a time-saver, I'm reduced to stating the obvious: the Web embodies practically the whole of human knowledge, and most of it's only a mouse click away. An archive search that in the past might have taken a week, plus thousands of miles of travel, can now be done at blitz speeds in the privacy of your own home or office. Etcetera.

The flip side, however, is that the Internet is also the world's greatest time sink. This was explicitly acknowledged as a goal by the two twenty-something developers of one of the famous Web sites or browsers or search engines, I forget which (it may have been Yahoo), who once jocularly said: "We developed this thing so that you don't have to waste time to start wasting time. Now you can start wasting time right away."

As indeed you can. In the newsprint age, I studiously avoided reading the papers on the dual grounds that (a) the news from day to day is pretty much the same ("renewed fighting in Bosnia," "suicide bomber kills X people in Y city"), and (b) in most cases you can do absolutely nothing about it anyway. Besides, it's depressing.

These days, though, while the news content remains exactly the same as before, I am a regular reader of the New York Times online, plus of course Google News, plus my local paper. Plus I check the stock market many times daily, plus the weather and the Doppler radar, plus blogs, where I sometimes get into stupid, mind-sapping, time-eating flame wars. I read the listservs I subscribe to, check out Miata.net for any spiffy new Miata products or automotive gossip, deal with my e-mail… and this doesn't even half cover the Homeric catalog of Internet ships that I sail on from day to day.

Of course I don't have to do any of this stuff. No one forces me to. I can only blame myself.

Still, the Internet is so seductive, which is odd considering that it's so passive an agency. It doesn't actually do anything. It hasn't cured cancer, the common cold, or even hiccups.

The Internet is a miracle and a curse. Mostly a miracle.

roger_schank's picture

CEO, Socratic Arts Inc.; John Evans Professor Emeritus of Computer Science, Psychology and Education, Northwestern University; Author, Make School Meaningful-And Fun!

The Internet has not changed the way I think nor has it changed the way anyone else thinks. Thinking has always been the same. To simplify: the thinking process starts with an expectation or hypothesis; thinking requires one to find (or make up) evidence that explains where that expectation went wrong; and thinking involves deciding upon explanations of one's initial misunderstanding. Thinking is about attempting to understand how an aspect of the world works, and the process hasn't changed since caveman times. The important questions in this process are these: What constitutes evidence? How do you find it? How do you know if what you found is true? We construct explanations based on the evidence we have found.

This process was in place long before the Internet existed. Thinking hasn't changed. What has changed is how we find evidence, how we interpret the evidence we have found, and how we find available explanations from which to choose.

I went into AI to deal with exactly this issue. I was irritated that people would argue about what was true. They would get into fights about Babe Ruth's lifetime batting average. That doesn't happen much any more. Someone can quickly find it. Argument over.

Finding evidence and interpreting evidence have not, unfortunately, changed that much either. At first glance, we might think that the Internet has radically changed the way we look for and accept evidence. And I am sure this is true for the intellectuals who write Edge response essays. I am able to find evidence more quickly, and to find explanations that others have offered more easily. I can think about a complex issue with more information and with the help of others who have thought about that issue before. Of course, I could always do this in a university environment, but now I can do it while sitting at home, and I can do it more quickly. This is nice, but less important than people realize.

Throughout human history, evidence to help thinking has been gathered by consulting others, typically the village elder who might very well have gotten his knowledge by talking to a puff of smoke. Today, people make decisions based on evidence that they get from the Internet all right, but that evidence often is no better than the evidence the village elder may have supplied. In fact, that evidence may well have been posted by the modern day version of the village elder.

The intelligentsia may well be getting smarter because they have easy access to a wider range of good thinking, but the rest of the world may easily be getting dumber because they have easy access to nonsense.

I don't believe the Internet has changed the way I or anyone else thinks. It has changed the arbiters of truth, however. Now everyone is an expert.

irene_pepperberg's picture

Research Associate & Lecturer, Harvard; Author, Alex & Me

The Internet hasn't changed the way I think; it hasn't altered one whit the way in which I (that is, my brain) process information… other than maybe by forcing me to figure out how to process a lot more of it. Consciously, I still use the same scientific training that was drummed into me as an undergraduate and graduate student in theoretical chemistry, even when it comes to evaluating aspects of my daily life: based on a certain preliminary amount of information, I develop a hypothesis and try to refine it so that it differs from any competing, equally plausible hypotheses; I test the hypothesis; if it is proven true, I rest my case within the limits of that hypothesis, accepting that I may have solved only one piece of a puzzle; if it is proven false, I revise and repeat the procedure.

Maybe the Internet has given me more things to think about, but that doesn't fundamentally change the way I think. Rather, what has changed, and is still changing, is my relationship with the Internet: from unabashed infatuation to disillusionment to a kind of armed truce. And, no, I'm not sidestepping the question, because until the Internet actually rewires my brain, it won't change my processing abilities. Of course, such rewiring may be in the offing, and quite possibly sooner than we expect, but that's not yet the case.

So, my changing love-hate relationship with the Internet.

First came the honeymoon phase, believing that nothing in the world could ever be as wondrous: an appreciation for all the incredible richness and simplicity that the Internet brought into my life. No longer did I have to trudge through winter's snow or summer's heat to a library at the other end of campus (or even come to campus) to acquire information, or to make connections to friends and colleagues all over the world.

Did I need to set up a symposium for an international congress? Just a few emails and all was complete. Did I need an obscure reference or that last bit of data for the next day's PowerPoint presentation while in an airport lounge, whether in Berlin or Beijing, Sydney or Salzburg? Ditto. Did I need a colleague's input on a tricky problem, or to provide the same service myself? Ditto. Even when it came to forgetting a birthday or anniversary and needing to research and send a gift somewhere in the world? Ditto. A close friend and colleague moves to Australia? No problem staying in touch anymore. But did all this change the way I think? No. It may have changed the way I work, because what changed were the various limitations on the types of information that were accessible within certain logistical boundaries, but my actual thought processes didn't alter.

Next came the disenchantment phase… the realization that more and faster were not always better. My relationship with the Internet began to feel oppressive, overly demanding of my time and energy. Just because I can be available and can work 24/7, 365 days a year, must I? The time saved and the efficiencies achieved began to backfire. I no longer had the luxury of recharging my brain by observing nature during that walk to the library, or by reading a novel while in that airport lounge.

Emails that supplanted telephone calls were sometimes misunderstood, because vocal modulations were missing. The number of requests to do X, Y, or Z began to increase exponentially because, for example, it was far easier to shoot me a question than to spend the time digging up the answer, even on the Internet. The lit search I performed on the supposedly infinitely large database failed to bring up that reference I needed and knew existed, because I had read it a decade ago but hadn't saved it for my files, figuring I could always bring it up again.

This Internet relationship was supposed to enable all of my needs to be met; how did it instead become the source of endless demands? How did it end up draining away so much time and energy? The Internet seemed to have given me a case of Attention Deficit Disorder. But did it really change the way I think, or did it just make it more difficult to find the time to think? Most likely the latter, because judicious use of the "off" button allowed a return to normalcy.

Which brings me to that armed truce: an attempt to appreciate the positives and accept the negatives, to set personal boundaries and to refuse to let them be breached. Of course, maybe it is just this dogmatic approach that prevents the Internet from changing the way that I think.

howard_rheingold's picture

Communications Expert; Author, Smart Mobs

Digital media and networks can only empower the people who learn how to use them, and pose dangers to those who don't know what they are doing. Yes, it's easy to drift into distraction, fall for misinformation, allow attention to fragment rather than focus, but those mental temptations pose dangers only for the untrained mind. Learning the mental discipline to use thinking tools without losing focus is one of the prices I am glad to pay to gain what the Web has to offer.

Those people who do not gain fundamental literacies of attention, crap detection, participation, collaboration, and network awareness are in danger of all the pitfalls critics point out: shallowness, credulity, distraction, alienation, addiction. I worry about the billions of people who are gaining access to the Net without the slightest clue about how to find knowledge and verify it for accuracy, how to advocate and participate rather than passively consume, how to discipline and deploy attention in an always-on milieu, and how and why to use those privacy protections that remain available in an increasingly intrusive environment.

I have concluded that the realities of my own life as a professional writer (if the words didn't go out, the money didn't come in) drove me to evolve a set of methods and disciplines. I know that others have mastered, far beyond my own practice, the mental habits that I've stumbled upon, and I suspect that learning these skills is less difficult than learning long division. I urge researchers and educators to look more systematically where I'm pointing.

When I started out as a freelance writer in the 1970s, my most important tools were a library card, a typewriter, a notebook, and a telephone. In the early 1980s, I became interested in the people at Xerox Palo Alto Research Center who were using computers to edit text without physically cutting, pasting, and retyping pages.

Through PARC I discovered Douglas Engelbart, who had spent the first decade of his career trying to convince somebody, anybody, that using computers to augment human intellect was not a crazy idea. Engelbart set out in the early 1960s to demonstrate that computers could be used to automate low-level cognitive support tasks like cutting, pasting, revising text, and also to enable intellectual tools like the hyperlink that weren't possible with Gutenberg-era technology.

He was convinced that this new way to use computers could lead to "increasing the capability of a man to approach a complex problem situation, to gain comprehension to suit his particular needs, and to derive solutions to problems. Increased capability in this respect is taken to mean a mixture of the following: more-rapid comprehension, better comprehension, the possibility of gaining a useful degree of comprehension in a situation that previously was too complex, speedier solutions, better solutions, and the possibility of finding solutions to problems that before seemed insoluble." Important caveats and unpredicted side-effects notwithstanding, Engelbart's forecasts have come to pass in ways that surprised him. What did not surprise him was the importance of both the know-how and how-to-know that unlock the opportunities afforded by augmentation technology.

From the beginning, Engelbart emphasized that the hardware and software created at his Stanford Research Institute laboratory, from the mouse to the hyperlink to the word processor, were part of a system that included "humans, language, artifacts, methodology and training." Long before the Web came along, Engelbart was frustrated that so much progress had been made in the capabilities of the artifacts, but so little study had been devoted to advancing the language, methodology, and training: the literacies that necessarily accompany the technical capabilities.

Attention is the fundamental literacy. Every second I spend online, I make decisions about where to spend my attention. Should I devote any mindshare at all to this comment or that headline? That is a question I need to answer each time an attractive link catches my eye. Simply becoming aware of the fact that life online requires this kind of decision-making was my first step in learning to tune a fundamental filter on what I allow into my head, a filter that is under my control only if I practice controlling it. The second level of decision-making is whether I want to open a tab on my browser because I have decided that this item will be worth my time tomorrow. The third decision: do I bookmark this site because I am interested in the subject and might want to reference it at some unspecified future time? Online attention-taming begins with what meditators call "mindfulness," the simple, self-influencing awareness of how attention wanders.

Life online is not solitary. It's social. When I tag and bookmark a Website, a video, or an image, I make my decisions visible to others. I take advantage of similar knowledge curation undertaken by others when I start learning a topic by exploring bookmarks, or find an image to communicate an idea by searching for a tag. Knowledge sharing and collective action involve collaborative literacies.

Crap detection (Hemingway's name for what digital librarians call credibility assessment) is another essential literacy. If all schoolchildren could learn one skill before they go online for the first time, I think it should be the ability to find the answer to any question and the skills necessary to determine whether the answer is accurate or not.

Network awareness, from the strength of weak ties and the nature of small-world networks to the power of publics and the how and why of changing Facebook privacy settings, would be the next literacy I would teach, after crap detection. Networks aren't magic, and knowing the principles by which they operate confers power on the knowledgeable. How could people NOT use the Internet in muddled, frazzled, fractured ways when hardly anybody instructs anybody else about how to use the Net salubriously? It is inevitable that people will use the Net in ways that influence how they think and what they think.

It is not inevitable that these influences will be destructive. The health of the online commons will depend on whether more than a tiny minority of Net users become literate Netizens.

peter_schwartz's picture

Futurist; Senior Vice President for Global Government Relations and Strategic Planning, Salesforce.com; Author, Inevitable Surprises

In 1973, just as I was starting work at Stanford Research Institute, I had the good fortune to be one of the earliest users of what was then known as the ARPANET. Collaborative work at a distance was the goal of the experiment that led to the suitcase-sized TI Silent 700 portable terminal, with an acoustic coupler and a thermal printer on the back (no screen), sitting on my desk at home in Palo Alto. I was writing scenarios for the future of the State of Washington with the staff of Governor Dan Evans in Olympia. It was the beginning of the redistribution of my sense of identity.

In the 1980s I was also a participant in the WELL, one of the first meaningful online communities. Nearly everyone who was part of the WELL had this sense of a very rich set of multiple perceptions constantly and instantly accessible. And, not just because the Deadheads were a large part of that community, my sense of an aware, distributed consciousness began to develop.

And finally, with the coming of the modern Internet, the World Wide Web, and the incredible explosion of knowledge access, another level of transformation took hold. I am one of those people who used to read encyclopedias and almanacs. I just wanted to know more; actually, everything. I also make my living researching, writing, speaking, and consulting. Depth, breadth, and richness of knowledge are what make it work, in my passions and my profession. Before the Internet, that was limited by the boundaries of my brain. Now there is a near-infinite pool of accessible information that becomes my knowledge in a heartbeat, measured in bits/sec. For those of us who wallow in the world of knowledge for pleasure and profit, the Internet has become a vast extension of our potential selves.

The modern Internet has achieved much of what Ted Nelson articulated decades ago in his vision of the Xanadu project, or Doug Engelbart in his human-augmentation vision at SRI. Nearly all useful knowledge is now accessible instantaneously from much of the world. Our effective personal memories are now vastly larger, essentially infinite. Our identity is embedded in what we know, and how I think is an expression of that identity. For me the Internet has led, over time, to that deep sense of collaboration, awareness, and ubiquitous knowledge which means that my thought processes are not bound by the meat machine that is my brain, nor by my locality, nor by my time.

clifford_pickover's picture

Author, The Math Book, The Physics Book, and The Medical Book trilogy

With increasing frequency, people around the globe seek advice and social support from other individuals connected via the Internet. Our minds arise not only from our own brains but from Internet prosthetic brains (IPBs): those clusters of people with whom we share information and advice through electronic networks. The simple notion of you and me is changing. For example, I rely on others to help me reason beyond the limits of my own intuition and abilities. Many of my decisions in life are shaped by my IPBs around the globe, and these decisions involve everything from advice on software and computer problems to health issues and emotional concerns. Thus, when asked to make a decision, who is the me who is actually making that decision?

The IPBs generated by social-network connectivity can be more important than the communities dependent on geographic locality. Through the IPBs, we exchange parts of our minds with one another. By the information we post on the Web and the interactions we have, we become IPBs for others. In some ways, when we die physically, a part of us survives as an IPB, in the memories and thoughts of others but also in the trails we leave on the Internet. Individuals who participate in social groups, blogs, and Twitter, and who deposit their writings on the Web, leave behind particles of themselves. Before the Internet, most of us rarely left marks on the world, except on our immediate family or a few friends. Before the Internet, within four generations even your immediate family knew nothing of you. In the "old days," your great-grandchildren might have carried some vestigial memory of you, but that faded like a burning ember when they died, and you were often extinguished and forgotten. I know nothing about my great-grandparents.

However, in the Internet Age, the "complete extinguishing" never really happens, especially for prominent or prolific users. For example, the number of Internet searches for something you wrote may asymptotically approach zero over the decades, but it will never quite reach zero. Given the ubiquity of the Internet, its databases, and its search engines, someone a hundred years from now may smile at something you wrote or wonder about who you were. You may become part of this future person's own IPB as he navigates through life. In the future, simulacra of you, derived in part from your Internet activities, will be able to converse with future generations.

Moreover, studies show that individuals within your social network have a profound influence on your personal health and happiness, for example through your contacts on the Internet (whom you usually know) and their friends (whom you may not know). Habits and ideas spread through a vast web of interconnectivity, like a virus. Behaviors can sometimes skip links, spreading to a friend of a friend without affecting the person who connects them. In summary, in the age of the Internet, the concept of you and personhood is more diffuse than ever before.

Because your interests, decision-making capabilities, habits, and even health are so intertwined with others', your personhood is better defined as a pseudo-personhood composed of yourself and the assembly of your IPBs, out to at least three degrees of network separation. When we die, the web of interconnectivity becomes torn, but one's pseudo-personhood, in some sense, continues to spread, like a soliton wave on a shoreless sea of Internet connections.
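
In graph terms, "three degrees of network separation" is just the set of people reachable in at most three links. A minimal Python sketch, using a wholly invented friendship graph, makes the idea concrete:

    from collections import deque

    # A made-up friendship graph; every name here is hypothetical.
    graph = {
        "you":   ["ana", "ben"],
        "ana":   ["you", "carla"],
        "ben":   ["you", "dev"],
        "carla": ["ana", "erin"],
        "dev":   ["ben"],
        "erin":  ["carla", "farid"],
        "farid": ["erin"],
    }

    def within_degrees(graph, start, max_hops=3):
        """Breadth-first search: everyone within max_hops links of start."""
        seen = {start: 0}
        queue = deque([start])
        while queue:
            person = queue.popleft()
            if seen[person] == max_hops:
                continue                  # don't expand past the horizon
            for friend in graph[person]:
                if friend not in seen:
                    seen[friend] = seen[person] + 1
                    queue.append(friend)
        seen.pop(start)                   # the self isn't part of the IPB
        return seen

    print(within_degrees(graph, "you"))
    # {'ana': 1, 'ben': 1, 'carla': 2, 'dev': 2, 'erin': 3}  (farid, at 4 hops, is out)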

When Marc Chagall was asked to explain why he became a painter, he said that a painting was like a window through which he "could have taken flight toward another world." Chagall explored the boundaries between the real and unreal. "Our whole inner world is reality," he once wrote, "perhaps more real still than the apparent world."

As the notion of IPBs and soliton personhood expands, this kind of boundary will become even more blurred. The IPBs become of Chagallian importance and encourage the use of new windows on the world. They foster a different kind of immortality, form of being, and flight.

matt_ridley's picture

Science Writer; Fellow, Royal Society of Literature and the Academy of Medical Sciences; Author, The Evolution of Everything

The Internet is the ultimate mating ground for ideas, the supreme lekking arena for memes. Cultural and intellectual evolution depends on sex just as much as biological evolution does; otherwise it remains a merely vertical transmission system. Sex allows creatures to draw upon mutations that happen anywhere in their species. The Internet allows people to draw upon ideas that occur to anybody in the world. Radio and printing did this too, and so did writing, and before that language, but the Internet has made it fast and furious.

Exchange and specialization are what make cultural evolution happen, and the Internet's capacity for encouraging exchange encourages specialization too. Somebody somewhere knows the answer to any question I care to ask, and it is much easier to find him or her. Often it is an amateur, outside journalism or academia, who just happens to have a piece of knowledge to hand. An example: suspicious of the claim that warm seas (as opposed to rapidly warming seas) would kill off coral reefs, I surfed the Net till I found the answer to the following question: is there any part of the oceans that is too hot for corals to grow? One answer lay in a blog comment from a diver just back from the Iranian side of the Persian Gulf, where he had seen diverse and flourishing coral reefs in 35°C water (ten degrees warmer than much of the Great Barrier Reef).

This has changed the way I think about human intelligence. I've never had much time for the academic obsession with intelligence. Highly intelligent people are sometimes remarkably stupid; stupid people sometimes make better leaders than clever ones. And so on. The reason, I realize, is that human intelligence is a collective phenomenon. If they exchange and specialize, a group of 50 dull-witted people can have a far higher collective intelligence than 50 brilliant people who don't. That is why it is utterly irrelevant if one race turns out to have a higher IQ than another, or one company hires people with higher IQs than another. I would rather be marooned on a desert island with a diverse group of mediocre people who know how to communicate (from a singer to a plumber, say) than with a bunch of geniuses.

The Internet is the latest and best expression of the collective nature of human intelligence.

david_g_myers's picture

Professor of Psychology, Hope College; Co-author, Psychology, 11th Edition

I cut my eyeteeth in social psychology with experiments on "group polarization": the tendency for face-to-face discussion to amplify group members' preexisting opinions. Never then did I imagine the potential dangers, or the creative possibilities, of polarization in virtual groups.

Electronic communication and social networking enable Tea Partiers, global-warming deniers, and conspiracy theorists to isolate themselves and find support for their shared ideas and suspicions. As the Internet connects the like-minded and pools their ideas, White supremacists may become more racist, Obama-despisers more hostile, and militia members more terror-prone (thus limiting our power to halt terrorism by conquering a place). In the echo chambers of virtual worlds, as in real worlds, separation + conversation = polarization.

But the Internet-as-social-amplifier can instead work for good, by connecting those coping with challenges. Peacemakers, cancer survivors, and bereaved parents find strength and solace from kindred spirits.

By amplifying shared concerns and ideas, Internet-enhanced communication can also foster social entrepreneurship. An example: as a person with hearing loss, I advocate a simple technology that doubles the functionality of hearing aids, transforming them, with the push of a button, into wireless loudspeakers. After experiencing this "hearing loop" technology in countless British venues, from cathedrals to post office windows and taxi back seats, I helped introduce it to West Michigan, where it can now be found in several hundred venues, including Grand Rapids' convention center and all gate areas of its airport. Then, via a Website, hearing listservs, and e-mail, I networked with fellow hearing advocates; by feeding off one another, we strengthened our resolve.

Thanks to the collective efficacy of our virtual community, hearing-aid-compatible assistive listening has spread to other communities and states. New York City is installing it in 488 subway information booths. Leaders in the American Academy of Audiology and the Hearing Loss Association of America are discussing how to promote this inexpensive, wireless assistive listening. Several state hearing loss associations are recommending it. The hearing industry is now including the needed magnetic receiver in most hearing aids and cochlear implants. And new companies have begun manufacturing and marketing hearing loop systems. Voilà: a grassroots, Internet-fueled transformation in how America provides listening assistance is underway.

The moral: By linking and magnifying the inclinations of kindred-spirited people, the Internet can be very, very bad, but also very, very good.

paul_kedrosky's picture

Editor, Infectious Greed; General Partner, SK Ventures

Three friends have told me recently that during their just-completed holidays they unplugged from the Internet and had big, deep thoughts. This worries me. First, three data points means it's a trend, so maybe I should be doing it. Second, I wonder if I could disconnect from the Internet long enough to have big, deep thoughts. Third, like most people I know, I worry that even if I disconnect long enough, my info-krill-addled brain is no longer capable of big, deep thoughts (which I will henceforth call BDTs).

Could I quit? At some level it seems a silly question, like asking how I feel about taking a breathing hiatus, or whether on Tuesdays I would give up gravity. The Internet no longer feels voluntary when it comes to thinking. Instead, it feels like the sort of thing that, when you make a conscious effort to stop doing it, bad things happen. As a kid I once swore off gravity and jumped from a barn haymow, resulting in a sprained ankle. Similarly, a good friend of mine sometimes asks fellow golfers before a swing whether they breathe in or breathe out. The next swing is inevitably horrible as the golfer sends a ball screaming into receptive underbrush.

Could I quit the Internet if it meant I would have more BDTs? Sure, I suppose I could, but I'm not convinced it would happen. The Internet is, for me, a kind of internal cognition combustion engine, something that vastly accelerates my ability to traverse large intellectual landscapes. Without it, it would be much more difficult to compare, say, theories about complexity, cell phones, and bee colony collapse disorder (rather than writing an overdue paper), or to count hotel rooms in default in California versus Washington state. (In case you're curious, there are roughly twice as many defaulted hotel rooms in California as there are total hotel rooms in Seattle.)

In saying I could quit, but not quitting (even if quitting meant more BDTs), I could be accused of cynicism. I get to tell myself I could quit and have BDTs, without ever testing whether, if I did quit, I would actually have said thoughts. That has a great deal of appeal, not least because I get the frisson of contemplating BDTs without actually going to the trouble of a) giving up the Internet, and b) seeing whether I actually have the aforementioned thoughts.

Like most people I know, I worry noisily that the Internet has made me incapable of having BDTs. I feel sure that I used to have such things, but for some reason I no longer do. Maybe the Internet has damaged me (I've informed myself to death!) to the point that I don't know what big, deep thoughts are, or the brain chemicals formerly responsible for their emergence are now doing something else. Then again, this smacks of historical romanticism, like remembering the skies as always blue and the summers as eternal when you were eight years old.

So, as much as I kind of want to believe people who say they have big, deep thoughts when they disconnect from the web, I don't trust them. It reminds me of a doctor declaring herself/himself Amish for the day, and then heading from New York to Boston by horse & carriage with a hemorrhaging patient. Granted, you could do it, and some patients might even survive, but it isn't prudent or necessary. It seems instead a kind of public exercise in macho symbolism, like Iggy Pop carving something in his chest, a way of bloodily demonstrating that you're different, or even a sign of outright crankishness. Look at me! I'm thinking! No Internet!

If we know anything about knowledge, about innovation, and therefore about coming up with BDTs, it is that it is cumulative: an accretive process of happening upon, connecting, and assembling, like an infinite erector set, not just a few pretty I-beams strewn about on a concrete floor. But if BDTs were just about connecting things, then the Internet would be only mildly interesting in changing the way I think. Libraries connect things, people connect things, and connections can even happen, yes, while sitting disconnected from the Internet under an apple tree somewhere. Here is the difference: the Internet increases the speed and frequency of these connections and collisions, while dropping the cost of both to near zero.

It is that combination, cheap connections plus cheap collisions, that has done violence to the way I think. It is like having a private particle accelerator on my desktop, a way of throwing things into violent juxtaposition, with the resulting collisions reordering my thinking. The result is new particles (ideas!), some of which are BDTs, and many of which are nonsense. But the democratization of connections, collisions, and therefore thinking is historically unprecedented. We are the first generation to have the information equivalent of the Large Hadron Collider for ideas. And if that doesn't change the way you think, nothing will.

seth_lloyd's picture

Professor of Quantum Mechanical Engineering, MIT; Author, Programming the Universe

I think less. My goal is to transfer my brain's functions, bit by bit, to the Cloud.

When I do think, I am lazier. There's no point in making the strenuous trek over to the library to find the source when you can get an expurgated electronic version on Google Books right away. And why go look up the exact theorem when you can find an approximate version on Wikipedia?

OK, you can get burned. Math being what it is, an approximate theorem is typically an untrue theorem. Over the years, I have found most statements in purely scientific reference articles on Wikipedia to be 99.44% correct. It's that last 0.56% that gets you. I just wasted three months and almost published an incorrect result because one clause in the Wikipedia statement of a theorem was, in fact, wrong. It's a lucky thing the referee caught my error. In the meanwhile, however, I had used one of the great Internet innovations, the scientific preprint archive, to post the incorrect result on the Internet for everyone to see.

For hundreds of millions of years, Sex was the most efficient method for propagating information of dubious provenance: the origins of all those snippets of junk DNA are lost in the sands of reproductive history. Move aside, Sex: the World Wide Web has usurped your role. A single illegal download can propagate more parasitic bits of information than a host of mating tsetse flies. Indeed, as I looked further afield, I found that it was not just Wikipedia that was in error: essentially every digital statement of the clause in the theorem of interest was also incorrect. For better or worse, it appears that the only sure way to find the correct statement of a theorem is to trek to the library and find some book written by some dead mathematician, maybe even the same one who proved the theorem in the first place.

In fact, the key to correctness probably does not even lie in the fact that the book was written by that mathematician, so much as that the book was scrupulously edited by the editor of the series who invited the mathematician to write the book. Prose, poetry, and theorems posted on the Internet are no less insightful and brilliant than their paper predecessors: they are simply less edited. Moreover, just when we need them most, the meticulously trained editors of our newspapers, journals, and publishing houses are being laid off in droves.

Life, too, has gone through periods of editorial collapse. During the Cambrian explosion, living systems discovered the evolutionary advantage of complex, multicellular forms. Like the digital organisms of today's Internet, the new Cambrian lifeforms rewrote the rules of habitat after habitat, evolving rapidly in the process. Finally, however, they filled their environment to its carrying capacity: at that point, just being cool, complex, and multicellular was no longer enough to ensure survival. The sharp red pencil of natural selection came out and slashed away the gratuitous sequences of DNA.

For the moment, however, the ability of the Internet to propagate information promiscuously is largely a blessing. The preprint archives where scientific work (like my wrong paper) is posted for all to read are great levelers: a second- or third-world scientist with a modem can access the unedited state of the art in a scientific field as it is produced, rather than months or years later. Such scientists, in turn, can produce and post their own unedited preprints, and so on. As long as computer memories keep doubling in capacity every year or two, those stacks of unedited information will keep doubling and doubling, too, swamping the useful and correct in a sea of extraneous bits. Eventually, the laws of physics themselves will stop this exponential explosion of memory space, and we will be forced, once more, to edit. What will happen then?

Don't ask me. By then, the full brain transfer to the Cloud should be complete. I hope not to be thinking at all.

tor_n_rretranders's picture

Writer; Speaker; Thinker, Copenhagen, Denmark

The more you give, the more you get. The more you share, the more they care. The more you dare, the more is there for you. Dare, care and share.

The Internet has become the engine of the gift economy and of cooperation. The simple insight that there is so much more knowledge, data and wisdom out there than I can ever attend to in a lifetime shows me that life is not about collecting information into a depot of books, theorems, rote memories or titles. Life is about sharing with others what you have. Use it, share it, pick it when you need it. There is plenty out there.

In ecology, the waste of one organism is the food of another. Plants produce oxygen as a waste product — animals need it to live. We produce carbon dioxide as waste — and the plants enjoy it. To live is to be able to share your waste.

Human civilization seems to have forgotten that, through centuries of building and isolating waste depots and of exploiting limited resources. Now we are starting to learn that it is all about flows. Matter, energy, information, social links: they all flow through us. We share them with each other and with all the other inhabitants of this planet. The climate problem shows us what happens if we ignore that renewable flows are the real stuff, while depots and fortresses are illusions in the long run.

The Internet makes us think in the right way: Pass it on, let it go, let it flow. Thinking is renewed. Now we only need to change the way we act.

kevin_kelly's picture

Senior Maverick, Wired; Author, What Technology Wants and The Inevitable

We already know that our use of technology changes how our brains work. Reading and writing are cognitive tools that, once acquired, change the way in which the brain processes information. When psychologists use neuroimaging technology, like MRI, to compare the brains of literates and illiterates working on a task, they find many differences, and not just when the subjects are reading.

Researcher Alexandre Castro-Caldas discovered that processing between the hemispheres of the brain was different between those who could read and those who could not. A key part of the corpus callosum was thicker in literates, and "the occipital lobe processed information more slowly in individuals who learned to read as adults compared to those who learned at the usual age." Psychologists Ostrosky-Solis, Garcia and Perez tested literates and illiterates with a battery of cognitive tests while measuring their brain waves and concluded that "the acquisition of reading and writing skills has changed the brain organization of cognitive activity in general," not only in language but also in visual perception, logical reasoning, remembering strategies, and formal operational thinking.

If alphabetic literacy can change how we think, imagine how Internet literacy, and 10 hours per day in front of one kind of screen or another, is changing our brains. The first generation to grow up screen literate is just reaching adulthood, so we don't have any scientific studies of the full consequences of ubiquitous connectivity, but I have a few hunches based on my own behavior.

When I do long division or even multiplication, I don't try to remember the intermediate numbers. Long ago I learned to write them down. Because of paper and pencil, I am "smarter" in arithmetic. In a similar manner, I now no longer try to remember facts, or even where I found the facts. I have learned to summon them on the Internet. Because the Internet is my new pencil and paper, I am "smarter" in factuality.

But my knowledge is now more fragile. For every accepted piece of knowledge I find, there is within easy reach someone who challenges the fact. Every fact has its anti-fact. The Internet's extreme hyperlinking highlights those anti-facts as brightly as the facts. Some anti-facts are silly, some borderline, and some valid. You can't rely on experts to sort them out, because for every expert there is an equal and countervailing anti-expert. Thus anything I learn is subject to erosion by these ubiquitous anti-facts.

My certainty about anything has decreased. Rather than importing authority, I am reduced to creating my own certainty, not just about things I care about but about anything I touch, including areas about which I can't possibly have any direct knowledge. That means that in general I assume more and more that what I know is wrong. We might consider this state perfect for science, but it also means that I am more likely to have my mind changed for incorrect reasons. Nonetheless, the embrace of uncertainty is one way my thinking has changed.

Uncertainty is a kind of liquidity. I think my thinking has become more liquid. It is less fixed, as text in a book might be, and more fluid, as, say, text in Wikipedia might be. My opinions shift more. My interests rise and fall more quickly. I am less interested in Truth, with a capital T, and more interested in truths, plural. I feel the subjective has an important role in assembling the objective from many data points. The incremental, plodding progress of imperfect science seems the only way to know anything.

While hooked into the network of networks I feel like I am a network myself, trying to achieve reliability from unreliable parts. And in my quest to assemble truths from half-truths, non-truths, and some other truths scattered in the flux (this creation of the known is now our job and not the job of authorities), I find my mind attracted to fluid ways of thinking (scenarios, provisional belief) and fluid media like mashups, twitter, and search. But as I flow through this slippery Web of ideas, it often feels like a waking dream.

We don't really know what dreams are for, only that they satisfy some fundamental need. Someone watching me surf the Web, as I jump from one suggested link to another, would see a daydream. Today, I was in a crowd of people who watched a barefoot man eat dirt, then the face of a boy who was singing began to melt, then Santa burned a Christmas tree, then I was floating inside a mud house on the very tippy top of the world, then Celtic knots untied themselves, then a guy told me the formula for making clear glass, then I was watching myself, back in high school, riding a bicycle. And that was just the first few minutes of my day on the Web this morning. The trance-like state we fall into while following the undirected path of links may be a terrible waste of time, or, like dreams, it might be a productive waste of time. Perhaps we are tapping into our collective unconscious in a way that watching the directed stream of TV, radio and newspapers could not allow. Maybe click-dreaming is a way for all of us to have the same dream, independent of what we click on.

This waking dream we call the Internet also blurs the difference between my serious thoughts and my playful thoughts, or to put it more simply: I can no longer tell when I am working and when I am playing online. For some people the disintegration of the boundary between these two realms marks all that is wrong with the Internet: it is the high-priced waster of time. It breeds trifles. On the contrary, I cherish a good wasting of time as a necessary precondition for creativity, but more importantly I believe the conflation of play and work, of thinking hard and thinking playfully, is one of the greatest things the Internet has done.

In fact, the propensity of the Internet to diminish our attention is overrated. I do find that smaller and smaller bits of information can command the full attention of my over-educated mind. And not just me; everyone reports succumbing to the lure of fast, tiny interruptions of information. In response to this incessant barrage of bits, the culture of the Internet has been busy unbundling larger works into minor snippets for sale. Music albums are chopped up and sold as songs; movies become trailers, or even smaller video snips. (I find that many trailers really are better than their movies.) Newspapers become twitter posts. Scientific papers are served up in snippets on Google. I happily swim in this rising ocean of fragments.

While I rush into the Net to hunt for these tidbits, or to surf on its lucid dream, I've noticed a different approach to my thinking. My thinking is more active, less contemplative. Rather than begin a question or hunch by ruminating aimlessly in my mind, nourished only by my ignorance, I start doing things. I immediately, instantly go.

I go looking, searching, asking, questioning, reacting to data, leaping in, constructing notes, bookmarks, a trail, a start of making something mine. I don't wait. Don't have to wait. I act on ideas first now, instead of thinking on them. For some folks, this is the worst of the Net: the loss of contemplation. Others feel that all this frothy activity is simply stupid busywork, or spinning of wheels, or illusory action. I think to myself: compared to what?

Compared to the passive consumption of TV, or to sucking up bully newspapers, or to merely sitting at home going in circles musing about stuff in my head without any new inputs, I find myself much more productive by acting first. The emergence of blogs and Wikipedia is an expression of this same impulse: to act (write) first and think (filter) later. I have a picture of the hundreds of millions of people online at this very minute. To my eye they are not wasting time with silly associative links; they are engaged in a more productive way of thinking than the equivalent hundreds of millions of people were 50 years ago.

This approach does encourage tiny bits, but, surprisingly, at the very same time it also allows us to give more attention to works that are far more complex, bigger, and more complicated than ever before. These new creations contain more data and require more attention over longer periods; and these works are more successful as the Internet expands. This parallel trend is less visible at first because of a common shortsightedness that equates the Internet with text.

To a first approximation, the Internet is words on a screen: Google, papers, blogs. But this first glance ignores the vastly larger underbelly of the Internet: moving images on a screen. People (and not just young kids) no longer go to books and text first. If people have a question, they (myself included) head first for YouTube. For fun we go to online massive games, or catch streaming movies, including factual videos (documentaries are in a renaissance). New visual media are stampeding onto the Net. This is where the Internet's center of attention lies, not in text alone. Because of online fans, and streaming on demand, and rewinding at will, and all the other liquid abilities of the Internet, directors started creating movies that were more than 100 hours long.

These vast epics, like Lost and The Wire, had multiple interweaving plot lines, multiple protagonists, and an incredible depth of characters, and they demanded sustained attention that was not only beyond previous TV and 90-minute movies, but would have shocked Dickens and other novelists of yore. They would marvel: "You mean they could follow all that, and then want more? Over how many years?" I would never have believed myself capable of enjoying such complicated stories, or of caring enough about them to put in the time. My attention has grown. In a similar way, the depth, complexity and demands of games can equal these marathon movies, or any great book.

But the most important way the Internet has changed the direction of my attention, and thus my thinking, is that it has become one thing. It may look like I am spending endless nanoseconds on a series of tweets, endless microseconds surfing between Web pages, wandering between channels, and hovering only mere minutes on one book snippet after another; but in reality I am spending 10 hours a day paying attention to the Internet. I return to it after a few minutes, day after day, with essentially my full-time attention. As do you.

We are developing an intense, sustained conversation with this large thing. The fact that it is made up of a million loosely connected pieces is distracting us. The producers of Websites, and the hordes of commenters online, and the movie moguls reluctantly letting us stream their movies, don't believe they are mere pixels in a big global show, but they are. It is one thing now, an intermedia with 2 billion screens peering into it. The whole ball of connections, including all its books, all its pages, all its tweets, all its movies, all its games, all its posts, all its streams, is like one vast global book (or movie, etc.), and we are only beginning to learn how to read it. Knowing that this large thing is there, and that I am in constant communication with it, has changed how I think.

gary_marcus's picture

Professor of Psychology, Director NYU Center for Language and Music; Author, Guitar Zero

I am not sure the Internet has changed the way we think so much as the way we act. Information has become cheap, and we spend more time online than in libraries, but there's been no biological evolution: human brains remain human brains, with a finite capacity for absorbing information and a host of cognitive biases that impair our judgments. People have vastly more information at their disposal now, but that doesn't mean they know how to use it wisely. Teenagers, for example, often gauge the reliability of a Website by how slick the site is, rather than by the nature of its sources.

My suggestion? Let us use the Internet as an impetus for completely rebooting our educational system, reorienting it from its current but antiquated 18th-century emphasis on memorization (pointless in the age of Wikipedia) to a more modern emphasis on critical thinking skills, on metacognition and decision-making. Instead of teaching kids mere facts, we should be teaching children how to reason, reflect, plan, investigate and evaluate.

If we can do that, then (and perhaps only then) we might truly change how people think.

hans_ulrich_obrist's picture

Curator, Serpentine Gallery, London; Editor: A Brief History of Curating; Formulas for Now; Co-author (with Rem Koolhas), Project Japan: Metabolism Talks

A is for And And
The Internet made me think more BOTH AND instead of EITHER OR instead of NOR NOR.

B is for Beginnings
In terms of my curatorial thinking, my 'Eureka moments' occurred pre-Internet, when I met the visionary Swiss artists Fischli/Weiss in 1985. These conversations freed me up; they freed my thoughts as to what curating could be and how curating can produce reality. The arrival of the Internet was a trigger for me to think more in the form of Oulipian lists: practical-poetical, evolutive, and often nonlinear lists. This A to Z is an incomplete list. Umberto Eco calls the World Wide Web the 'mother of all lists': infinite by definition and in constant evolution.

C is for Curating the World
The Internet made me think towards a more expanded notion of curating. Stemming from the Latin word 'curare', the word 'curating' originally meant 'to take care of objects in museums'. Curation has long since evolved. Just as art is no longer limited to traditional genres, curating is no longer confined to the gallery or museum but has expanded across all boundaries. The rather obscure and very specialized notion of curating has become much more publicly used now that one talks about the curating of Websites, and this marks a very good moment to rediscover the pioneering history of art curating as a toolbox for 21st-century society at large.

D is for Delinking
In the years before being online, I remember that there were many interruptions by phone and fax, day and night. The reality of being permanently linked to the Net triggered my increasing awareness of the importance of moments of concentration: moments without interruption that require me to be completely unreachable. I no longer answer the phone at home, and I only answer my mobile phone in the case of fixed telephone appointments. To link is beautiful. To delink is sublime. (Paul Chan)

D is for Disrupted narrative continuity
Forms of film montage, such as the disruption of narrative and the disruption of spatial and temporal continuity, have been a staple tactic of the avant-garde from Cubism and Eisenstein, through Brecht, to Kluge or Godard. For avant-gardism as a whole, it was essential that these tactics were recognized (experienced) as a disruption. The Internet has made disruption and montage the operative bases of everyday experience. Today, these forms of disruption can be harnessed and poeticized. They can foster new connections, new relationships, new productions of reality: reality as life-montage / life as reality-disruption? Not one story but many stories...

D is for Doubt
A certain unreliability of technical and material information on the Internet brings us to the notion of doubt. I feel that doubt has become more pervasive. The artist Carsten Höller has invented the Laboratory of Doubt, which is opposed to mere representation. As he has told me, 'Doubt and perplexity ... are unsightly states of mind we'd rather keep under lock and key because we associate them with uneasiness, with a failure of values'. Höller's credo is not to do; not to intervene. To exist is to do and not to do is a way of doing. 'Doubt is alive; it paralyzes certainty.' (Carsten Höller)

E is for Evolutive exhibitions
The Internet makes me think more about non-final exhibitions and exhibitions in a state of becoming. When conceiving exhibitions, I sometimes like to think of randomized algorithms, access, transmission, mutation, infiltration and circulation (the list goes on). The Internet makes me think of exhibitions less as top-down master plans and more as bottom-up processes of self-organisation, like do it or Cities on the Move.

F is for Forgetting
The ever-growing, ever-pervasive records that the Internet produces sometimes make me think about the virtues of forgetting. Is a limited lifespan for certain information and data becoming more urgent?

H is for Handwriting (and Drawing ever Drawing)
The Internet has made me aware of the importance of handwriting and drawing. Personally, I typed all my early texts, but the more the Internet has become all-encompassing, the more I have felt that something went missing. Hence the idea to reintroduce handwriting: I do more and more of my correspondence as handwritten letters, scanned and sent by email. On a professional note, I observe, as a curator, the importance of drawing in current art production. One can also see it in art schools: this is a moment when drawing is an incredibly fertile zone.

I is for Identity
"Identity is shifty, identity is a choice". (Etel Adnan)

I is for Inactual considerations
The future is always built out of fragments of the past. The Internet has brought thinking more into the present tense, raising questions of what it means to be contemporary.

Recently, Giorgio Agamben revisited Nietzsche's 'Inactual Considerations', arguing that the one who belongs to his or her own time is the one who does not coincide perfectly with it. It is because of this shift, this anachronism, that he or she is more apt than others to perceive and to catch his or her time. Agamben follows this observation with his second definition of contemporaneity: the contemporary is the one who is able to perceive obscurity, who is not blinded by the lights of his or her time or century.

This leads us, interestingly enough, to the importance of astrophysics in explaining the relevance of obscurity for contemporaneity. The seeming obscurity in the sky is the light that travels to us at full speed but which can't reach us because the galaxies from which it originates are ceaselessly moving away from us at a speed superior to that of light. The Internet and a certain resistance to its present tense have made me increasingly aware that there is an urgent call to be contemporary. To be contemporary means to perpetually come back to a present where we have never yet been. To be contemporary means to resist the homogenization of time, through ruptures and discontinuities.

M is for Maps
The Internet increased the presence of maps in my thinking. It's become easier to make maps, to change them, and also to work on them collaboratively and collectively and share them (e.g. Google Maps and Google Earth). After the focus on social networks of the last couple of years, I have come to see the focus on location as a key dimension.

N is for New geographies
The Internet has fuelled (and been fuelled by) a relentless economic and cultural globalization, with all its positive and negative aspects. On the one hand, there is the danger of homogenizing forces, which is also at stake in the world of the arts. On the other hand, there are unprecedented possibilities for difference-enhancing global dialogues. In the long duration there have been seismic shifts, like that of the 16th century, when the paradigm shifted from the Mediterranean to the Atlantic. We are living through a period in which the centre of gravity is transferring to new centres. The early 21st century is seeing the growth of a polyphony of art centres in the East and West, in the North and South.

N is for Non-mediated experiences, N is for the New Live
I feel an increased desire for non-mediated experiences. Depending on one's point of view, the virtual may be a new and liberating prosthesis of the body, or it may threaten the body. Many visual artists today negotiate and mediate between these two, staging encounters of non-mediated intersubjectivity. In the music field, the crisis of the record industry goes hand in hand with an increased importance of live concerts.

P is for Parallel realities
The Internet creates and fosters new constituencies, new micro-communities. As a system that infinitely breeds new realities, it is predisposed to reproduce itself in a proliferating series of ever more functionally differentiated subsystems. As such, it makes my thinking go towards the production of parallel realities, bearing witness to the multiverse, as the physicist David Deutsch might say. For better or worse, the Internet allows that which is already latent in the fabric of reality to unravel itself and expand in all directions.

P is for Protest against forgetting
Over the last years I have felt an increasing urgency to conduct more and more interviews, to make an effort to preserve traces of intelligence from the last decades. One particularly urgent part of this is the testimonies of the 20th-century pioneers who are in their 80s or 90s or older, and whom I regularly interview: testimonies of a century from those who are not online and who very often fall into oblivion. This protest might, as Rem Koolhaas has told me, act as 'a hedge against the systematic forgetting that hides at the core of the information age and which may in fact be its secret agenda'.

S is for Salon of the 21st century
The Internet has made me think more about whom I would like to introduce to whom; to cyberintroduce people as a daily practice or to introduce people in person through actual salons for the 21st century (see the Brutally Early Club).

Last but not least, the response of David Weiss, who answers this year's Edge question with a new question, asking whether our thinking can influence the Internet.

jon_kleinberg's picture

Tisch University Professor of Computer Science, Cornell University

When Rio de Janeiro was announced as the site of the 2016 Summer Olympics, I was on the phone with colleagues, talking about some ideas for how to track breaking news on the Internet. Curious to see how reactions to the announcement were playing out, we went onto the Web to take a look, pushing our way like tourists into the midst of a celebration that was already well underway. The sense that we were surrounded by crowds was not entirely in our imaginations: over a thousand tweets per minute about Rio were appearing on Twitter; Wikipedians were posting continuous updates to their "2016 Summer Olympics" page; and political blogs were filled with active conversations about the lobbying of world leaders on behalf of different cities.
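As a hypothetical aside (my illustration, not the method we were actually discussing on that call): tracking a signal like tweets per minute can be as simple as keeping a sliding one-minute window of event timestamps and flagging the moment the count crosses a threshold. In a minimal Python sketch, with the 1,000-per-minute threshold borrowed from the Rio figure above:

```python
# Sliding-window rate tracking: count recent events and flag a burst
# when the one-minute rate crosses a threshold. Purely illustrative.
from collections import deque
from typing import Optional
import time

class BurstDetector:
    def __init__(self, window_seconds: float = 60.0, threshold: int = 1000):
        self.window = window_seconds       # look-back window: one minute
        self.threshold = threshold         # e.g. 1,000 tweets per minute
        self.timestamps: deque = deque()

    def observe(self, t: Optional[float] = None) -> bool:
        """Record one event; return True if the window now holds a burst."""
        t = time.time() if t is None else t
        self.timestamps.append(t)
        # Evict events that have fallen out of the look-back window.
        while self.timestamps and self.timestamps[0] < t - self.window:
            self.timestamps.popleft()
        return len(self.timestamps) >= self.threshold
```

Feeding each incoming tweet's timestamp to observe() would be enough to notice the moment a story like the Rio announcement erupts.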

This is the shape that current events take on-line, and there is something more going on here than simple volume. Until recently, information about an event like this would have been disseminated according to a top-down structure, consisting of an editorially assembled sampling of summaries of the official announcement, reports of selected reactions, and stories of crowds gathering at the scene. But now the information emerges bottom-up, converging in tiny pieces from all directions: the crowd itself speaks, in a million distinct voices, a deluge of different perspectives.

The Web hasn't always looked this way. When I first used an Internet search engine in the early 1990s, I imagined myself dipping into a vast, universal library, a museum vault filled with accumulated knowledge. The fact that I shared this museum vault with other visitors was something that I knew in principle but could not directly perceive: we had the tools to engage with the information but not with one another, and so we all passed invisibly by each other.

When I go on-line today, all those rooms and hallways are teeming, and I can see it. What strikes me is the human texture of the information: the visible conversations, the spikes and bursts of text, the controlled graffiti of tagging and commenting. I've come to appreciate the way the event and the crowd in fact live in symbiosis, each dependent on the other: the people all talking at once about the event, but the event only fully comprehensible as the sum total of the human reaction to it. The construction feels literary in its complexity, a scene as though described by an omniscient narrator, jumping between different points of view, except that here all these voices belong to real, living beings, and there's no master narrative coordinating them. The cacophony might make sense, and it might not.

But the complexity does not just arise from all the human voices; it is accentuated by the fact that the online world is one where human beings and computational creations commingle. You bump into these computational artifacts like strange characters in a Carrollian Wonderland. There is the giant creature who has memorized everything ever written, and will repeat excerpts back to you (mainly out of context) in response to your questions. There are the diaphanous forms, barely visible at the right-hand edge of your field of vision, who listen mutely as you cancel meetings and talk about staying home in bed, and then mysteriously begin slipping you ads for cough medicine and pain relievers. And even more exotic characters are on the way; a whole industry works tirelessly to develop them.

The ads for cough medicine are important, and not just because they're part of what pays for the whole operation. They should continuously remind you that you're part of the giant crowd as well, that everything you do is feeding into a global conversation that is not only visible but recorded. I try to reflect on what behavioral targeting algorithms must think of me: what the mosaic of my actions must look like when everything is taken into account, and which pieces of that mosaic would have been better left off the table.

The complexity of the online world means that when I use the Internet today, even for the most mundane of purposes, I find myself drawing on skills that I first learned in doing research: evaluating many different observations and interpretations of the same events; asking how people's underlying perspectives, tools, and ways of behaving have served to shape their interpretations; and reflecting on my own decisions as part of this process. Think about the cognitive demands this activity involves. Once the domain of scholarship, it is now something that the Internet requires from us on a daily basis. It suggests that in addition to "computer literacy," an old pursuit where we teach novices how to use computing technology in a purely operational sense, we need to be conveying the much more complex skill of "information literacy" at very young ages: how to reason about the swirl of perspectives you find when you consume information on-line, how to understand and harness the computational forces that shape this information, and how to reason about the subtle consequences of your own actions on the Internet.

Finally, the Internet has changed how I think professionally, as a computer scientist. In the thirteen years since I finished graduate school, the Internet has steadily and incontrovertibly advanced the argument that computer science is not just about technology but about human beings as well: about the power of human beings to collectively create knowledge and engage in self-expression on a global scale. This has been a thrilling development, and one that points to a new phase in our understanding of what people and technology can accomplish together, and of the world we've grown to jointly inhabit.

lynn_margulis's picture

By using the Internet I have renewed, or begun anew, epistolary interactions on a global basis with superb, knowledgeable scientists and historians. The Internet has made quickly available much obscure scientific literature relevant and invaluable to me. It has generated new colleagues. The luxury of the Internet (far beyond the usual 'he says, she says, they say' gossip) leads us, both nearby and geographically distant associates (graduate students, family members, et al.), towards the answer to a key question about the grand sweep of the history of life in its biospheric environment on Planet Earth. (Note: of course our planet is mostly not earth; it ought to be renamed Planet Water or Planet Hard Rock.)

The Internet makes a difference as we zero in on the final, detailed solution of our scientific problem: "How did the ancestral nucleated cell evolve some 1000 million years ago?" (These are the cells of which all animals, plants, mushrooms, algae, etc. are composed.) Everyone agrees that this evolutionary turning point, the appearance of animal-type cells in the fossil record, happened in the time period geologists call the Proterozoic Eon. How?

The short answer is that nucleated cells evolved "by promiscuous forbidden sexual fusion among wildly different kinds of bacteria." Alas, our motley collection of fused bacterial ancestors never escaped from their "marriage contract". They survived, and they still live together, with the ups and downs of permanent merger.

Probably some bacterial ancestors look back at the period 1000-600 million years ago, when both water and air were full of hydrogen sulfide (poisonous to people), as "The Age of Bacteria": a calmer, quieter time, before oxygen bubbled up and its combustion fueled the frenetic rate of environmental degradation that began in the Proterozoic eon and continues until today, aided and abetted by our very recent (Holocene) loud, careless, ignorant, frantic, clever but unwise, ephemeral human species. The rest of our planetmates were there before us and will be there when we're gone.

The Internet pushes this notion farther, louder and of course with the velocity of light.

james_j_odonnell's picture

Classics Scholar, University Librarian, ASU; Author, Pagans

How is the Internet changing the way I think? My fingers have become part of my brain. What will come of this? It's far too early to say.

Once upon a time, knowledge consisted of what you knew yourself and what you heard (literally, with your ears) from others. If you were asked a question in those days, you thought of what you had seen and heard and done yourself and what others had said to you. I'm rereading Thucydides this winter and watching the way everything depended on who you knew and where the messengers came from and whether they were delayed en route, walking from one end of Greece to another. Thucydides was literate, but his world hadn't absorbed that new technology yet.

With the invention of writing, the eyes took on a new role. Knowledge wasn't all in memory, but was found in present, visual stimuli: the written word in one form or another. We have built a mighty culture based on all the things that humankind can produce and the eye can study. What we could read in the traditional library of 25 years ago was orders of magnitude richer and more diverse than the most that any person could ever see, hear, or be told of in one lifetime. The modern correlative to Thucydides would be Churchill's history of World War II and the abundance of written documents he shows himself dependent on at every stage of the war. But imagine Churchill or Hitler with Internet-like access to information!

Now we change again. It's less than twenty years since the living presence of networked information became part of our thinking machinery. What it will mean to us that vastly more people have nearly instantaneous access to vastly greater quantities of information cannot be said with confidence. In principle, it means a democratization of innovation and of debate. In practice, it also means a world in which many have already proven that they can ignore what they do not wish to think about, select what they wish to quote, and produce a public discourse demonstrably poorer than what we might have known in the past.

But just for myself, just for now, it's my fingers I notice. Ask me a good question today, and I find that I begin fiddling. If I am away from my desk, I pull out my Blackberry so quickly and instinctively that you probably think I'm ignoring your question and starting to read my e-mail or play Brickbreaker (and sometimes I am!). But when I'm not, that is, when you've asked a really interesting question, it's a physical reaction, a gut feeling that I need to start manipulating (the Latin root for 'hand', *manus*, is in that word) the information at my fingertips in order to find the data that will support a good answer. At my desktop, it's the same pattern: the sign of thinking is that I reach for the mouse and start "shaking it loose," making the circular pattern on the mouse pad that lets me see where the mouse arrow is, make sure the right browser is open, and get a search window handy. My eyes and hands have already learned to work together in new ways with my brain, in a process of clicking, typing a couple of words, clicking, scanning, clicking again that really is a new way of thinking for me.

That finger work is unconscious. It just starts to happen. But it's the way I can now tell thinking has begun as I begin working my way through an information world more tactile than ever before. Will we next have three-dimensional virtual spaces in which I gesture, touch, and run my fingers over the data? I don't know: nobody can. But we're off on a new and great adventure whose costs and benefits we will only slowly come to appreciate.

What all this means is that we are in a different space now, one that is largely unfamiliar to us even when we think we are using familiar tools (like a "newspaper" that has never been printed or an "encyclopedia" vastly larger than any shelf of buckram volumes), and one that has begun life by going through rapid changes that only hint at what is to come. I'm not going to prophesy where that goes, but I'll sit here a while longer, watching the ways I really have come to "let my fingers do the walking", wondering where they will lead.

brian_knutson's picture

Professor of Psychology and Neuroscience; Stanford University

Like it or not, I have to admit that the Internet has changed both what and how I think.

Consider the obvious yet still remarkable fact that I spend at least 50% of my waking hours on the Internet, compared to 0% of my time 25 years ago. In terms of what I think, almost all of my information (e.g., news, background checks, product pricing and reviews, reference material, general "reality" testing, etc.) now comes from the web. Although I work at a research institution, my students often look genuinely pained if I ask them to physically go to the library to check a reference, or (god forbid!) dig up something that is not online. In fact, I felt the same pain just recently when I had to traipse to the medical library (for the first time in three years) to locate some untranslated turn-of-the-century psychology by Wilhelm Wundt. Given the ubiquity and availability of Web content, how could one resist its influence? Although this content probably gets watered down as a function of distance from the source, consensual validation might offset the degradation. Plus, the Internet makes it easier to poll the opinions of trusted experts. So overall, the convenience and breadth of information on the Internet probably helps more than hurts me.

In terms of how I think, I fear that the Internet is less helpful. Although I can find information faster, that information is not always the most relevant, and is often tangential. More often than I'd like to admit, I sit down to do something and then get up bleary-eyed hours later, only to realize my task remains undone (or I can't even remember the starting point). The sensation is not unlike walking into a room, stopping, and asking "now, what was I here for?", except that you've just wandered through a mansion and can't even remember what the entrance looked like.

This frightening "face-sucking" potential of the Web reminds me of conflicts between present and future selves first noted by ancient Greeks and Buddhists, and poignantly elaborated by philosopher Derek Parfit. Counterintuitively, Parfit considers present and future selves as different people. By implication, with respect to the present self, the future self deserves no more special treatment than anyone else.

Thus, if the present self doesn't feel a connection with the future self, then why forgo present gratification for someone else's future kicks? Even assuming that the present self does feel connected to the future self, the only way to sacrifice something good now (e.g., reading celebrity gossip) for something better later (e.g., finishing that term paper) is to slow down enough to appreciate that connection, consider the conflict between present and future rewards, weigh the options, and decide in favor of the best overall course of action. The very speed of the Internet and the convenience of Web content accelerate information search to a rate that crowds out reflection, which may bias me towards gratifying the salient but fleeting desires of my present self. Small biases, repeated over time, can have large consequences. For instance, those who report feeling less connected to their future self also have less in their bank accounts.

I suspect I am not the sole victim of Internet-induced "present self bias." Indeed, Web-based future-self prostheses have begun to emerge, including software that tracks time off task and intervenes (ranging from reminders, to blocking access, to shutting programs down). Watching my own and others' present-versus-future-self struggles, I worry that the Internet may impose a "survival of the focused," in which individuals gifted with some natural capacity to stay on target, or hopped up on enough stimulants, forge ahead, while the rest of us flail helplessly in some Web-based attentional vortex. All of this makes me wonder whether I can trust my selves on the Internet. Or do I need to take more draconian measures: for instance, leave my computer at home, chain myself to a coffeehouse table, and draft longhand? At least in the case of this confessional, the future self's forceful but unsubtle tactics prevailed.
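A toy version of such a future-self prosthesis is sketched below, purely as an illustration: it assumes nothing more than a fixed-interval reminder, whereas the real tools also watch window titles or URLs and can block sites or close programs outright.

```python
# A toy "future-self prosthesis": interrupt at fixed intervals so the
# present self must consciously decide whether to keep browsing.
# Deciding what actually counts as "off task" is left to the user.
import time

def nag_loop(interval_minutes: float = 10.0) -> None:
    """Print a reminder every `interval_minutes` until interrupted."""
    while True:
        time.sleep(interval_minutes * 60)
        print(f"{interval_minutes:.0f} more minutes gone. "
              "Present self, meet future self: keep going, or back to the term paper?")

if __name__ == "__main__":
    nag_loop(10.0)
```

Even this blunt instrument captures the essential move: forcing the present self to pause long enough to remember the future one.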

john_markoff's picture

Pulitzer Prize-winning Reporter, The New York Times; Author, Machines of Loving Grace

It's been three decades since Les Earnest, then assistant director of the Stanford Artificial Intelligence Laboratory, introduced me to the ARPAnet. It was 1979, and from his home in the hills overlooking Silicon Valley he was connected, via a terminal and a 2400-baud modem, to Human Nets, a lively virtual community that explored the impact of technology on society.

It opened a window for me into an unruly cyberworld that at first seemed to be, to paraphrase the words of computer music researcher and composer John Chowning, a "Socratean Abode." Over the next decade and a half, I joined the camp of what I have since come to think of as "Internet Utopians." The Net seemed to offer a shining city-on-a-hill, free from the grit and foulness of the meat world. Ideologically, this was a torch carried by Wired Magazine, and the ideal probably reached its zenith in John Perry Barlow's 1996 essay "A Declaration of the Independence of Cyberspace."

Silly me. I should have known better. It would all be spelled out clearly in Brunner's The Shockwave Rider, Gibson's Neuromancer, Stephenson's Snow Crash, Vinge's True Names, and even less-well-read classics like Barnes' The Mother of Storms. Science fiction writers were always the best social scientists, and in describing the dystopian nature of the Net they were again right on target.

There would be nothing even vaguely utopian about the reality of the Internet, despite preachy "The Road Ahead" vision statements by late-to-the-Web luminaries like Bill Gates. This gradually dawned on me during the 1990s, driven home with particular force by the Kevin Mitnick affair. By putting every human on the planet directly in contact with every other, the Net opened a Pandora's box of nastiness.

Indeed, while it was true that the Net skipped lightly across national boundaries, the demise of localism didn't automatically herald the arrival of a superior cyberworld. It simply accentuated and accelerated both the good and the bad, in effect becoming a mirror for all the world's fantasies and foibles.

Welcome to a bleak Blade Runner-esque world dominated by Russian, Ukrainian, Nigerian and American cyber-mobsters, in which our every motion and movement is surveilled by a chorus of Big and Little Brothers.

Not only have I been transformed into an Internet pessimist, but recently the Net has begun to feel downright spooky. Not to be anthropomorphic, but doesn't the Net seem to have a mind of its own? We've moved deeply into a world where it is leaching value from virtually every traditional institution in the name of some Borg-like future. Will we all be assimilated, or have we been already? Wait! Stop me! That was The Matrix, wasn't it?

tim_oreilly's picture

Founder and CEO, O'Reilly Media, Inc.; Author, WTF?: What's the Future and Why It's Up to Us

Many years ago, I began my career in technology as a technical writer, landing my first job writing a computer manual on the same day that I saw my first computer. The one skill I had to rely on was one I had honed in my years as a reader, and in my university training in Greek and Latin classics: the ability to follow the breadcrumb trail of words back to their meaning.

Unfamiliar with the technology I was asked to document, I had to recognize landmarks and to connect the dots, to say "these things go together." I would read a specification written by an engineer, over and over, until I could read it like a map, and put the concepts in the right order, even if I didn't fully understand them yet. That understanding would only come when I followed the map to its destination.

Over the years, I honed this skill, and when I launched my publishing business, the skill that I developed as an editor was the skill of seeing patterns. "Something is missing here." "These two things are really the same thing seen from different points of view." "These steps are in the wrong order." "In order for x to make sense, you first have to understand y." Paula Ferguson, one of the editors I hired, once wrote that "all editing is pattern matching." You study a document, and you study what the document is talking about, and you work on the document until the map matches the territory.

In those early years of trying to understand the industry I'd been thrust into, I read voraciously, and it was precisely because I didn't understand everything that I read that I honed my ability to recognize patterns. I learned not as you are taught in school, with a curriculum and a syllabus, but with the explorations of a child, who composites a world-view bit by bit out of the stuff of everyday life.

When you learn in this way, you tell your own story and draw your own map. When my co-worker Dale Dougherty created GNN, the Global Network Navigator, the first commercial web portal, in 1993, he named it after The Navigator, a 19th-century handbook that documented the shifting sandbars of the Mississippi River.

Over the years, my company has been a map-maker in the world of technology, spotting trends, documenting them, and telling stories about where the sandbars lie, the portages that cut miles off the journey, as well as the romance of travel and the glories of the destination. In telling stories to explain what we've learned and encourage others to follow us into the West, we've become not just mapmakers but meme makers. Open Source, Web 2.0, the Maker movement, Government as a Platform are all stories we've had a role in telling.

It used to be the case that there was a canon, a body of knowledge shared by all educated men and women. Now, we need the skills of a scout, the ability to learn, to follow a trail, to make sense out of faint clues, and to recognize the way forward through confused thickets. We need a sense of direction that carries us onward through the wood despite our twists and turns. We need "soft eyes" that take in everything we see, not just what we are looking for.

The information river rushes by. Usenet, email, the world wide web, RSS, twitter: each generation carrying us faster than the one before.

But patterns remain. You can map a river as well as you can map a mountain or a wood. You just need to remember that the sandbars may have moved the next time you come by.

terence_koh's picture

i am very interested in the Internet, especially right now.

the Internet is a completely new form of sense.

as a human, i have experienced reality, as have the rest of my species since we gained the ability to self-realize, as a combination of what we see, smell, feel, hear, and taste.

but the Internet, and this is a term i think goes beyond the idea of just the Web on a computer (Websites, emails, blogs, twitter, google etc), has become "something" that i cannot myself really define yet.

the Internet is really growing beyond this "something" so that even if someone does not have a computer, the Internet still affects them.

so this is very interesting, because the Internet is becoming a new form of sense that has not existed since we began to self-realize as humans.

and because this affects everybody, i feel that thinking about what the Internet is now must always come back to myself as an individual. cause it is becoming more and more important to see how our individual thoughts and actions affect everything else around us. it all still starts with the "i", with me.

a new collective sense of "i" is the Internet...

so that there is a new form of "i" that is also "we" at the same time because we are all involved with it.

i am not sure if i am answering your question, as it is a question that i do think about consciously every day now but can't quite figure out.

and forgive me if i may sound like a bad science fiction writer, but if i may give any direction to your question, i think that the Internet is probably going to evolve by itself very very soon to give you better answers than i can hopefully ever give.

and i do not think i would even know it myself when that happens.

that is quite a scary thought.

tom_mccarthy's picture

Artist & Writer; Author: Remainder, Men in Space

'How has the Internet changed the way you think?' It hasn't.

Western culture has always been about networks: look at Clytemnestra's 'beacon telegraph' speech in the Oresteia, or the relay-system of oracles and cryptic signals Oedipus has to navigate. Look at Schreber's vision of wires and nerves, or Kafka and Rilke's visions of giant switchboards linking mortals to (and simultaneously denying them access to the source-code of) gods and angels. Or the writings of Heidegger, or Derrida: meshes, relays, endless transmission. The Internet reifies a logic that was always already there.

gloria_origgi's picture

Philosopher and Researcher, Centre National de la Recherche Scientifique, Paris; Author, Reputation: What it is and Why it Matters

I spend more than half of my working hours doing my email: I have 4407 messages in my Gmail Inbox today: stuff that I haven't read yet, that I have to reply to, or that I keep in the Inbox just to take advantage of the search facilities and be able to easily retrieve it when needed.

Each time I find myself at the end of the afternoon still writing messages to friends, colleagues, perfect strangers, students, etc., I have the guilty feeling of having wasted my day, as if the weakness of my will had prevailed over any sense of duty and intellectual responsibility. The psychological reaction can be harsh, to the point of inflicting on myself various forms of punishment, such as imprisonment in a dusty Parisian library without an Internet connection or voluntarily switching off the modem at my place. That is because I have the precise idea that my work is NOT writing emails: rather, it is a matter of writing papers and learned essays on philosophy and related issues.

But what is philosophy? What is academic work in general, at least in the humanities? One of my mentors once said to me: Being an academic just means being part of a conversation. That's it. Plato used the dialogue as a form of expression to render in a more vivid way the dialectic process of thinking and constructing knowledge from open verbal confrontation. One of the books that influenced me most during my undergraduate philosophical studies in Italy was Galileo's Dialogue on the Two Chief World Systems. I read on the Edge site that Edge is a conversation. So, what is so bad about the email conversations that are invading my life? What is the big difference between the contemplative state in front of the blank page of a new paper and the excited exchange through Gmail or Skype with a colleague living in another part of the world?

My intellectual life started to get much better when I realized that the difference is not that great: even papers and the comments on papers, reviews, replies, etc. are conversations in slow motion. I write a paper for an academic journal; the paper is evaluated by other philosophers who suggest improvements; it is then disseminated to the academic community in order to prompt new conversations on a topic or launch new topics for discussion. That is the rule of the game. And if I make an introspective effort and try to visualize my way of thinking, I realize that I am never alone in my mind: a number of more or less invited guests are sitting around somewhere in my brain, challenging me when I claim this or that with overconfidence or when I definitely affirm my resolution to act in a certain way.

Arguing is a basic ingredient of thinking: our way of structuring our thought would have been very different without the powerful tool of verbal exchange. So, let's acknowledge that the Internet allows us to think and write in a much more natural way than the one imposed by the written culture tradition: the dialogical dimension of our thinking is now enhanced by continuous, liquid exchanges with others.

The way out of the guilty feeling of wasting our time is to commit ourselves to interesting and well-articulated conversations, just as we accept invitations to dinners at which we hope to have a stimulating chat rather than fall asleep after the second glass of wine. I run a Website that keeps track of high-level, learned conversations between academics. I find that each medium produces its own waste: most books are just noise that disappears a few months after release. I don't think we should concentrate on the waste; rather, we should try to make responsible use of our conversational skills and free ourselves from unreal commitments to accidental formats, such as the book or the academic paper, whose authoritative role depends on the immense role they played in our education.

If it happens that what we will leave to the next generation are threads of useful and learned conversations, then so be it: I see this as an improvement in our way of externalizing our thinking, a much more natural way of being intelligent in a social world.

stephen_m_kosslyn's picture

Founding Dean, Minerva Schools at the Keck Graduate Institute

Other people can help us compensate for our mental and emotional deficiencies, much as a wooden leg can compensate for a physical deficiency. Specifically, other people can extend our intelligence and help us understand and regulate our emotions. I've argued that such relationships can become so close that other people essentially act as extensions of oneself, much like a wooden leg can serve as an extension of oneself. When another person helps us in such ways, he or she is participating in what I've called a "Social Prosthetic System." Such systems do not need to operate face-to-face, and it's clear to me that the Internet is expanding the range of my Social Prosthetic Systems. The Internet is already an enormous repository of the products of many minds, and the interactive aspects of the evolving Internet are bringing it ever closer to the sort of personal interactions that underlie Social Prosthetic Systems.

Even in its current state, the Internet has extended my memory, perception, and judgment.

Regarding memory: Once I look up something on the Internet, I don't need to retain all the details for future use—I know where to find that information again, and can quickly and easily do so. More generally, the Internet functions as if it is my memory. This function of the Internet is particularly striking when I'm writing; I am no longer comfortable writing if I'm not connected to the Internet. It's become completely natural to check facts as I write, taking a minute or two to dip into PubMed, Wikipedia, or the like. When I write with a browser open in the background, it feels like the browser is an extension of myself.

Regarding perception: Sometimes I feel as if the Internet has granted me clairvoyance: I can see things at a distance. I'm particularly struck by the ease of using videos, allowing me to feel as though I've witnessed a particular event in the news. It's a cliché, but the world really does feel smaller.

Regarding judgment: The Internet has made me smarter, in matters small and large. For example, when writing a textbook it's become second nature to check a dozen definitions of a key term, which helps me to distill the essence of its meaning. But more than that, I now regularly compare my views with those of many other people. If I have a "new idea," I now quickly look to see whether somebody else has already had it, or conceived of something similar—and I then compare and contrast what I think with what others have thought. This inevitably hones my own views. Moreover, I use the Internet for "sanity checks," trying to gauge whether my emotional reactions to an event are reasonable, quickly comparing them to those of others.

These effects of the Internet have become even more striking since I've been using a smartphone. I now regularly pull out my phone to check a fact, watch a video, or read blogs. Such activities fill the spaces that used to be dead time (such as waiting for somebody to arrive for a lunch meeting).

But that's the upside. The downside is that when I used to have those dead periods, I often would let my thoughts drift, and sometimes would have an unexpected insight or idea. Those opportunities are now fewer and farther between. Like anything else, constant connectivity poses tradeoffs; nothing is without a price. But in this case, I think—on balance—it's a small price to pay. I am a better thinker now than I was before I integrated the Internet into my mental and emotional processing.

jonas_mekas's picture

Film-Maker, Critic; Co-founder, Film-Makers' Cooperative, Filmmaker’s Cinematheque, Anthology Film Archives

I am a farmer boy. When I grew up, there was only one radio in our entire village of twenty families. And, of course, no TV, no telephone and no electricity. I saw my first movie when I was fourteen.

In New York, in 1949, I fell in love with cinema. In 1989 I switched to video. In 2003 I embraced computer/Internet technologies.

I am telling you this to indicate that my thinking is only now entering the Internet Nation. It's still in its infancy; I am not really thinking the Internet way yet—I am only babbling.

But I can tell you that it has already affected the content, form and the working procedures of everything that I do. It's entering my mind secretly, indirectly.

In 2007 I did a project, the 365 Day Project: I put one short film on the Internet every day. In cinema, when I was making my films, it was very abstract. I could not think about the audience. I knew the film would be placed in a film distribution center and eventually someone would look at it. Now, in my 365 Day Project, I knew that later the same day I would put it on the Internet, and within minutes it would be seen by all my friends, and strangers too, all over the world. So I felt like I was conversing with them. It's intimate. It's poetic. I am not thinking anymore about problems of distribution. I am just exchanging my work with some friends. Like being part of a family. I like that. It makes for a different state of mind. Whether a state of mind has anything or nothing to do with thinking is unimportant to me. I am not exactly a thinking person. I am a poet.

I would like to add one more note on what the Internet has done to me. I began paying more attention to everything that the Internet seems to be eliminating. Books especially, but also nature. In short: the more it all expands into virtual reality, the more I feel a need to love and protect actual reality. Not for sentimental reasons, no. I do that from a very real, practical, almost survival need: from my knowledge that I would lose a very essential part of myself by losing actual reality, both cultural and physical.

kai_krause's picture

Software Pioneer; Philosopher; Author, A Realtime Literature Explorer

One look at the 'most active search terms', called 'Google Zeitgeist', or the current 'TV ratings winners', or MTV's 'top ten musical artists', and I get the uncanny feeling of being surrounded by an alien race of humanoids.

Who are these people? And what are they doing with these glorious resources?

That perception of desperate solitude has probably always been a central part of any sane and rational thinker—as well as of the less sane and irrational artist. A highly intense love-hate relationship of an active mind towards the teeming lemming millions surrounding and suffocating him. Now enter: the Web.

Has the Internet changed my own thinking? Dramatically so.

Not at the neuron level, but more abstractly: it completely redefined how we perceive the world and ourselves in it, new models of how we work and research, entertain ourselves, communicate with our family and friends, how we learn about the past and preserve our memories, what we expect of the future and how we plan for it, what we watch, read, listen to: all greatly influenced by technology in general and the Net in particular.

But it is a double-edged sword, a yinyang yoyo of the good, the bad and the ugly.

Long ago I stopped expecting 'the world as such' and 'society as a whole' to provide solutions for me on a silver plate. The only sensible strategy is an eclectic path to define quality of life for yourself, and use all tools in whatever customized fashion to forge your path.

In other words: the planet is in shambles, but you can try to help and still carve out a meaningful, peaceful & happy existence on it.

The Internet is the epitome of that concept: barely in its infancy, in a deplorable state between 'not quite there yet' and 'already half fallen apart', unruly chaos, ugly, confused, appealing to the worst base instincts, but: you can use it in entirely unprecedented ways to enhance your life ambitions, with more choices, options and knowledge than any crowned head in history.

But it is worth contrasting the euphoria with a taste of the dystopia.

Not the obvious topics like terror and child porn—the lesser but mind-numbingly pervasive evils unnerve me: virus, trojan & phishing scams, incessant Nigerian cash crap, shrink your debt, lengthen your penis, news lite going gaga over Gaga, while teens are violently 'happy slapping' and ultracore pr0n swapping, guys with tattooed faces play ego shooters with death metal screams...

...the tip of a dysfunctional iceberg.

Being there during the very early days of computing and the Net, I cannot help but compare the vision, the hope and the theory with the reality we find ourselves in decades later. There were such lofty expectations for using multimedia in education and learning, but soon after, with Douglas Adams in a series of roundtable appearances in the nineties, we called it "multimediocrity".

No one then expected the extent of this seething underbelly, or the pathetic forms it would take.

A Byron poem, interrupted by hemorrhoid ointment ads? Clicking one, you get: "Now! New! Find the best deals on hemorrhoids!"

I cringe, in several places.

Brockman's mail arrived... in the Gmail spam folder. I noticed the ad at the top: "Creamy Spam Broccoli Casserole", it said. "Serves Eight".

Silly and cynical, but not so bad.

Writing to a friend I began "we nearly died laughing", but even before I finished the paragraph, Google ads showed "funeral plots" & "discount caskets".

Morbid, but not so bad?

Watching an unbelievably beautiful video of Hubble probing the edge of space: an unfathomable 17,000 comments, but half of them inane or gross, with atrocious spelling: from childish name-calling, immature outbursts and vicious moronic bullying to outright gibberish insanity. Reading YouTube comment threads can make you sense the end of the world as we knew it.

How sad, but I guess one doesn't have to look?

But that's not an acceptable answer. It is not just silly, cynical or morbid. It is all too easy to look away and cling to our personal list of "fave cool stuff" while the seams are showing and the veneer is loose.

The ethereal beauty also contains lethal ether to the less fortunate non-digerati, such as the children or the elderly.

The Internet brings the promise of connecting it all.

But it could also connect it all... into one gigantic mess.

The sum-total of human lack of knowledge.

Of course there are many positive counter-examples. I cling to them daily. Wikipedia itself is a miracle of sorts, and incidentally, edge.org must be cited as a hidden gem. Actually, it is more like a 19th-century salon (no interactivity, not even a forum or comments), and ultimately these essays will be read—as a book! Telling and charming.

In my sixth decade now, I have always had a wholehearted passion for new horizons, searching out the newest tools possible. I got into synthesizers in the late sixties to create sounds no one had heard before, then into computer graphics in the seventies to make images no one had ever seen.

And soon I became a tool maker myself, active in the emerging online world from ArpaNet and the Well to UseNet, creating daily chatrooms about pixels & philosophy, years before the Web even began.

So this is not a quick quip by some Luddite or Noob who 'doesn't get it', but rather a profound objection by a saddened observer since the earliest days, clinging to his deeply appreciative fascination for the immense potential.

Last decade I spent cocooned, quietly thinking about approaches, solutions, ideas. There is much to say, which, however, the margin is not large enough to contain.

Eventually, it will all get there, just as it always did spiral forwards and evolve: from Newton to Einstein, just as from Newton to iPhone.

The Net will not reach its true potential in my little lifetime. But it surely has influenced the thinking in my lifetime like nothing else ever has.

thomas_metzinger's picture

Professor of Theoretical Philosophy, Johannes Gutenberg-Universität Mainz; Adjunct Fellow, Frankfurt Institute for Advanced Study; Author, The Ego Tunnel

I heard a strange, melodic sound from the left and turned away from the Green Woman. As I shifted my gaze towards the empty landscape, I noticed that something wasn't quite right. The new visual scene, the hills and the trees, was as real as could be—but somehow it just hadn't come into sight as it would in real life, had I turned my head normally. Somehow it wasn't quite real-time. The way the visual scene popped up had a slightly different temporal dynamics, an almost unnoticeable delay—as if I were surfing the Web, clicking my way on to another page. But I certainly wasn't surfing! I had just talked to the Green Woman, and no!, my right index finger wasn't clicking, and my right hand certainly wasn't lying on a mouse pad—it hung down from the side of my body, completely relaxed, as I gazed into the empty landscape of hills and trees. In a flash of excitement and disbelief it dawned on me: I was dreaming!

Lucid dreams are something I have always been interested in, and have written about extensively. For consciousness researchers, lucid dreams are interesting because you can go for a walk through the dynamics of your own neural correlate of consciousness, unconstrained by external input, and look at the way it unfolds, from the inside. For philosophers they are certainly interesting too. You can ask dream characters you encounter what they think about notions like "virtual embodiment" and "virtual selfhood"—and whether they actually believe they have a mind of their own. Unfortunately, I have lucid dreams only rarely—once or twice a year. The episode above was the beginning of my last one, and a lot of things dawned on me at once, not just the fact that I was actually all inside my own head: The Internet is reconfiguring my brain; it changes not only the way in which I think. The influence is much deeper; it already penetrates my dream life. Sure, for academics the Internet is a fantastic resource—almost all of the literature at your fingertips, unbelievably efficient ways of communicating and cooperating with researchers around the world, an endless source of learning and inspiration. Something that leads you right into attention deficit disorder. Something that gets you hooked. Something that confronts you with your greed. Something that is already changing us in our deepest core.

This is about much more than cognitive style alone: For those of us intensively working with it, the Internet has already become a part of our self-model. We use it for external memory storage, as a cognitive prosthesis, and for emotional autoregulation. We think with the help of the Internet, and it helps us determine our desires and goals. Affordances infect us, subtly eroding the sense of control. We are learning to multitask, our attention span is becoming shorter, and many of our social relationships are taking on a strangely disembodied character. Some software tells us "You are now friends with Peter Smith!"—when we were just too shy to click the "Ignore" button.

"Online addiction" has long become a technical term in psychiatry. Many young people (including an increasing number of university students) suffer from attention deficits and are no longer able to focus on old-fashioned, serial symbolic information; they suddenly have difficulty reading ordinary books. Everybody has heard about midlife burnout and rising levels of anxiety in large parts of the population. Acceleration is everywhere.

The core of the problem is not cognitive style, but something else: attention management. The ability to attend to our environment, to our own feelings, and to those of others is a naturally evolved feature of the human brain. Attention is a finite commodity, and it is absolutely essential to living a good life. We need attention in order to truly listen to others—and even to ourselves. We need attention to truly enjoy sensory pleasures, as well as for efficient learning. We need it in order to be truly present during sex, or to be in love, or when we are just contemplating nature. Our brains can generate only a limited amount of this precious resource every day. Today, the advertisement and entertainment industries are attacking the very foundations of our capacity for experience, drawing us into the vast and confusing media jungle. They are trying to rob us of as much of our scarce resource as possible, and they are doing so in ever more persistent and intelligent ways. We know all that. But here is something we are just beginning to understand—that the Internet affects our sense of selfhood, and on a deep functional level.

Consciousness is the space of attentional agency: Conscious information is exactly that information in your brain to which you can deliberately direct your attention. As an attentional agent, you can initiate a shift in attention and, as it were, direct your inner flashlight at certain targets: a perceptual object, say, or a specific feeling. In many situations, people lose the property of attentional agency, and consequently their sense of self is weakened. Infants cannot control their visual attention; their gaze seems to wander aimlessly from one object to another, because this part of their Ego is not yet consolidated. Another example of consciousness without attentional control is the non-lucid dream state. In other cases, too, such as severe drunkenness or senile dementia, you may lose the ability to direct your attention—and, correspondingly, feel that your "self" is falling apart.

If it is true that the experience of controlling and sustaining your focus of attention is one of the deeper layers of phenomenal selfhood, then what we are currently witnessing is not only an organized attack on the space of consciousness per se but a mild form of depersonalization. New media environments may therefore create a new form of waking consciousness that resembles weakly subjective states—a mixture of dreaming, dementia, intoxication, and infantilization. Now we all do this together, every day. I call it Public Dreaming.

andrian_kreye's picture

Editor-at-large of the German Daily Newspaper, Sueddeutsche Zeitung, Munich

I think faster now. The Internet has somewhat freed me—of some of the 20th century's burdens. The burden of commuting. The burden of coordinating communication. The burden of traditional literacy. I don't think the Internet would be of much use if I hadn't carried those burdens to excess all through my life. If continually speeding up thinking constitutes changing the way I think, though, the Internet has done a marvelous job.

I wasn't an early adopter, but the process started early. I didn't quite understand yet what would come upon us when Marvin Minsky told me one afternoon in 1989 at MIT's Media Lab that the most important trait of a computer wouldn't be its power, but what it would be connected to. A couple of years later I stumbled upon the cyberpunk scene in San Francisco. People were popping smart drugs (which didn't do anything), Timothy Leary declared virtual reality the next psychedelics (which never panned out), Todd Rundgren warned of a coming overabundance of creative work without a parallel rise in great ideas (which is now reflected in the laments about the rise of the amateur). It was still the old underground running the new emerging culture. This new culture was driven by thought rather than art, though. It's also where I met Cliff Figallo, who ran a virtual community called The Well. He introduced me to John Perry Barlow, who had just started a foundation called the Electronic Frontier Foundation. The name said it all. There was a new frontier.

It would still take me a few more years to grasp. One stifling evening in a rented apartment in downtown Dakar, my photographer and I disassembled a phone line and a modem to circumvent some incompatible jacks and get our laptop to dial up some node in Paris. It probably saved us a good week of research in the field. Now my thinking started to take on the speed I had sensed in Boston and San Francisco. Continually freeing me of the aforementioned burdens, it has allowed me to focus even more on the tasks expected of me as a journalist—finding context, meaning and a way to communicate complex topics in the simplest of ways.

One important development has allowed this to happen: possibly the greatest of all traits the Internet has developed over the past few years is that it has become inherently boring. Gone are the adventurous days of using a pocket knife to log onto Paris from Africa. Even in remote places of this planet, logging onto the Net means merely turning on your machine. This paradigm reigns all through the Web. Twitter is one of the simplest Internet applications ever developed; still, it has sped up my thinking in ever more ways. Facebook in itself is dull, but it has created new networks not possible before. Integrating all media into a blog has become so easy that grammar school kids can do it, so that the freeform forum has become a great place to test out new possibilities. I don't think about the Internet anymore. I just use it.

All this might not constitute a change in thinking, though. I haven't changed my mind or my convictions because of the Internet. I haven't had any epiphanies while sitting in front of a screen. The Internet so far has not given me any memorable experiences, although it might have helped to usher some along. It has always been people, places and experiences that have changed the way I think and provided me with a wide variety of memorable experiences.

geoffrey_miller's picture

Evolutionary psychologist, NYU Stern Business School and University of New Mexico; author of The Mating Mind and Spent

The Internet changes every aspect of thinking for the often-online human: perception, categorization, attention, memory, spatial navigation, language, imagination, creativity, problem-solving, Theory of Mind, judgment, and decision-making. These are the key research areas in cognitive psychology, and constitute most of what the human brain does. BBC News and The Economist Website extend my perception, becoming my sixth sense for world events. Gmail structures my attention through my responses to incoming messages: delete, respond, or star for response later? Wikipedia is my extended memory. An online calendar changes how I plan my life. Google Maps changes how I navigate through my city and world. Facebook expands my Theory of Mind—better understanding the beliefs and desires of others.

But for me, the most revolutionary change is in my judgment and decision-making—the ways I evaluate and choose among good or bad options. I've learned that I can offload much of my judgment onto the large samples of peer ratings available on the Internet. These, in aggregate, are almost always more accurate than my individual judgment. To decide which Blu-ray disks to put in my Netflix queue, I look at the average movie ratings on Netflix, IMDB, and Metacritic. These reflect successively higher levels of expertise among the raters—movie renters on Netflix, film enthusiasts on IMDB, and film critics on Metacritic. Any film with high ratings across all three sites is almost always exciting, beautiful, and thoughtful.

My fallible, quirky, moody judgments are hugely enhanced by checking average peer ratings: book and music ratings on Amazon, used car ratings on Edmunds, foreign hotel ratings on TripAdvisor, and citations to scientific papers on Google Scholar. We can finally harness the Law of Large Numbers to improve our decision-making: the larger the sample of peer ratings, the more accurate the average. As ratings accumulate, margins of error shrink, confidence intervals get tighter, and estimates improve. Ordinary consumers have access to better product-rating data than market researchers could hope to collect.
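The statistical point here can be made concrete with a few lines of simulation. The sketch below is my own illustration, not anything from the essay: the "true" quality score and the rater noise are invented parameters. It shows the 95% confidence interval around an average rating tightening as the sample of raters grows, which is the Law-of-Large-Numbers effect described above.

    import random
    import statistics

    # Hypothetical parameters, invented purely for illustration.
    TRUE_QUALITY = 7.2      # the film's "real" quality on a 10-point scale
    RATER_NOISE_SD = 2.0    # spread of individual, quirky, moody judgments

    random.seed(42)

    # As the number of peer ratings grows, the average converges on the
    # true quality and the 95% confidence interval tightens.
    for n in (10, 100, 1000, 10000):
        ratings = [random.gauss(TRUE_QUALITY, RATER_NOISE_SD) for _ in range(n)]
        mean = statistics.fmean(ratings)
        sem = statistics.stdev(ratings) / n ** 0.5   # standard error of the mean
        print(f"n={n:>5}  mean={mean:5.2f}  95% CI half-width: {1.96 * sem:.2f}")

The half-width of the interval shrinks roughly with the square root of the sample size, which is why a film rated by ten thousand viewers is a far safer bet than one rated by ten.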

Online peer ratings empower us to be evidence-based about almost all of our decisions. For most goods and services, and indeed most domains of life, they offer the consumer a kind of informal meta-analysis—an aggregation of data across all the analyses already performed by other like-minded consumers. Judgment becomes socially distributed and statistical rather than individual and anecdotal.

Rational-choice economists might argue that sales figures are a better indication than online ratings of real consumer preferences, insofar as people vote with their dollars to reveal their preferences. This ignores the problem of buyer's remorse: consumers buy many things that they find disappointing. Their post-purchase product ratings mean much more than their pre-purchase judgments. Consumer Reports data on car owner satisfaction ('Would you buy your car again?') are much more informative than new-car sales figures. Metacritic ratings of the Twilight movies are more informative about quality than first-weekend box office sales. Informed peer ratings are much more useful guides to sensible consumer choices than popularity-counts, sales volumes, market share, or brand salience.

You might think that post-purchase ratings would be biased by rationalization—I bought product X, so it must be good, or I'd look like a fool. No doubt that happens when we talk with friends and neighbors, but the anonymity of most online ratings reduces the embarrassment of admitting one's poor judgments and wrong decisions.

Of course, peer ratings of any product, like votes for a politician, can be biased by stupidity, ignorance, fashion cycles, mob effects, lobbying, marketing, and vested interests. The average online consumer's IQ is only a little above 100 now, and their average education is just a couple of years of college. Runaway popularity can be mistaken for lasting quality. Clever ads, celebrity endorsements, and brand reputations can bias the judgment of even the most independent-minded consumers. Rating sites can be gamed and manipulated by retailers. Nonetheless, online peer ratings remain more useful than any other consumer-empowerment movement in the last century.

To use peer ratings effectively, we have to let go of our intellectual and aesthetic pretensions. We have to recognize that some of our consumer judgments served mainly as conspicuous displays of our own intelligence, openness, taste, or wealth, and are not really the best way to choose the best option. We have to learn some humility. My best recent movie-viewing experiences have all come from valuing the Metacritic ratings over my own assumptions, prejudices, and pre-judgments. In the process, I've learned a new-found respect for the collective wisdom of our species. This recognition that my own thinking is not so different from, or better than, everyone else's, is one of the Internet's great moral lessons. Online peer ratings reinforce egalitarianism, mutual respect, and social capital. Against the hucksterism of marketing and lobbying, they knit humanity together into collective decision-making systems of formidable power and intelligence.

jaron_lanier's picture

Computer Scientist; Musician; Author, Who Owns The Future?

The Internet as it evolved up to about the turn of the century was a great relief and comfort to me, and influenced my thinking positively in a multitude of ways. There were the long-anticipated quotidian delights of speedy information access and transfer, but also the far more important optimism born from seeing so many people decide to create Web pages and become expressive, proving that the late 20th century's passive society on the couch in front of the TV was only a passing bad dream.

In the last decade, the Internet has taken on unpleasant qualities, and has become gripped by reality-denying ideology.

The current mainstream, dominant culture of the Internet is the descendant of what used to be the radical culture of the early Internet. The ideas are unfortunately motivated to a significant degree by a denial of the biological nature of personhood. The new true believers attempt to conceive of themselves as becoming ever more like abstract immortal information machines, instead of messy, mortal, embodied creatures. This is nothing but yet another approach to an ancient folly: the psychological denial of ageing and dying. To be a biological realist today is to hold a minority opinion during an age of profound, overbearing, technologically-enriched groupthink.

When I was in my twenties, my friends and I were motivated by the eternal frustration of young people that they are not immediately all made rulers of the world. It used to seem supremely annoying to my musician friends, for instance, that the biggest stars, like Michael Jackson, would get millions of dollars in advance for an album, while an obscure, minor artist like me would only get a $100K advance to make one (and this was in early-1990s dollars).

So what to do? Kill the whole damned system! Make music free to share, and demand that everyone build reputation on a genuine all-to-all network instead of a broadcast network, so that it would be fair. Then we'd all go out and perform to make money, and the best musician would win.

The lecture circuit was particularly good to me as a live performer. My lecture career was probably one of the first of its kind that was driven mostly by my online presence. (In the old days, my crappy Web site got enough traffic to merit coverage as an important Web site by the mainstream media like the New York Times.) It seemed as though money was available on tap.

Seemed like a sweet way to run a culture back then, but in the bigger picture, it's been a disaster. Only a tiny, token number of musicians, if any, do as well within the new online utopia as even I used to do in the old world, and I wasn't particularly successful. Every musician I have been able to communicate with about their true situation, including a lot of extremely famous ones, has suffered after the vandalism of my generation, and the reason isn't abstract but because of biology.

What we denied was that we were human and mortal, that we might someday have wanted children, even though it seemed inconceivable at the time. In the human species, neoteny, the extremely slow fading of our juvenile characteristics, has made child rearing into an extreme, draining long-term commitment.

That is the reality. We were all pissed at our own parents for not coming through in some way or other, but evolution has extended the demands of human parenting to the point that it is impossible for parents to come through well enough, ever. Every child must be disappointed to some degree because of neoteny, but economic and social systems can be designed to minimize the frustration. Unfortunately the Internet, as it has come to be, maximizes it.

The way that neoteny relates to the degradation of the Internet is that as a parent, you really can't go running around to play gigs live all the time. The only way for a creative person to live with what we can call dignity is to have some system of intellectual property to provide sustenance while you're out of your mind with fatigue after a rough night with a sick kid.

Or, spouses might be called upon to give up their own aspirations for a career, but there was this other movement called Feminism happening at the same time that made that arrangement less common.

Or, there might be a greater degree of socialism to buffer biological challenges, but there was an intense libertarian tilt coincident with the rise of the Internet in the USA. All the options have been ruled out, and the result is a disjunction between true adulthood and the creative life.

The Internet, in its current fashionable role as an aggregator of people through social networking software, only values humans in real time and in a specific physical place—a place that is usually away from their children. The human expressions that used to occupy the golden pyramidion of Maslow's pyramid are treated as worthless in themselves.

But dignity is the opposite of real time. Dignity means, in part, that you don't have to wonder if you'll successfully sing for your supper for every meal. Dignity ought to be something one can earn. I have focused on parenting here, since it is what I am experiencing now, but the principle becomes even more important as people become ill, and then even more as people age. So, for these reasons and many others, the current fashionable design of the Internet, dominated by so-called social networking designs, has an anti-human quality. But very few people I know share my current perspective.

Dignity might also mean being able to resist the near-consensus of your peer group.

dave_morin's picture

Internet Entrepreneur; Angel Investor

My generation is the first to have lived its entire life with the Internet. The Internet is how we think. We have developed a way of thinking that depends on being connected to an ever-changing graph of all the world’s people and ideas. The Internet helps to define, evolve, and grow us. The Internet is social. The Internet is a way of life. The Internet provides context.

Because I have lived most of my life with the Internet, it is the steady addition of new contexts that has most changed the way I think. In the beginning, the Internet was a giant mess of unstructured, unorganized, identity-free data spread across unconnected computers all over the world.

Then things started to change. Organizations and companies began to structure and provide context to the documents and data housed in this expanding network of the world’s computers.

Opening, connecting, and organizing the information on the world’s computers has enabled us to search for the answers to our most important questions and to provide more context to the information in our lives.

Once the world’s information was put into context, we looked beyond the keyboard, and collectively shifted to people. We focused on social context by asking questions like: Who are you? How are we connected? What is on your mind? What matters to you?

Making the Internet more social enabled people to share their real name, likeness, voice, and the things that they are connected to. Now we always have an understanding of who is talking, who and what they are connected to, what they are saying, and to whom; through understanding identity and social context we have achieved greater openness as a society.

In the future, the challenge will be continuing to add new contexts and improve existing ones in order to help people live better, happier lives. So that no matter where you are, what you are doing, who you are with, or what you are thinking, it is always in context.

joseph_ledoux's picture

Professor of Neural Science, Psychology, Psychiatry, and Child and Adolescent Psychiatry, NYU; Director Emotional Brain Institute; Author, Anxious

A woman witnesses a crime and recounts it to a policeman. Months later she appears in court to testify. As her story unfolds, it begins to differ from the notes taken by the policeman. A journalist covering the case notices that her testimony includes things she could not have known at the time but that were later discovered and that appeared in his newspaper. Though intensely grilled by the DA, she sticks by her story.

Why did her memory change? Why didn't she know the difference between what she experienced and what she read in the paper? The short answer is that remembering is a dangerous affair in the life of a memory. A slightly longer answer requires that we delve into the mechanisms that store memories.

Memory formation occurs in stages. Initially, a temporary or short-term memory is formed. This memory is fragile and will dissipate unless it is converted into a long-term memory through protein synthesis inside the neurons that processed the experience. The new proteins stabilize the synaptic connections that constitute memory at the cellular level. If protein synthesis is disrupted in the hours following the experience, a long-term memory does not result. The conversion of short-term into long-term memory via protein synthesis is called consolidation.

It has also been found that disruption of protein synthesis after the remembrance of a fully consolidated long-term memory produces a loss of the memory. This is taken to mean that when memories are retrieved, they have to be reconsolidated via protein synthesis in order to persist.

Reconsolidation is essentially an updating process. After consolidation, a memory remains unchanged until it is retrieved. At that point, the brain has the opportunity to incorporate new information into the memory, things that have been learned since the memory was stored initially. I haven't thought about the Edge Annual Question since last year, but now that I have been forced to remember it, my memory of it includes the new question.

So far so good. But considerable research now suggests that reconsolidation can overwrite previous memories. That is, the old memory is eliminated and the new one involves a collage of old and new information. This integration process determines what we will remember the next time. When our witness read the newspaper account, the old memory was retrieved and new information was integrated with the old information. She was unable to tell the difference between what she experienced and what she later learned because it was now one memory. Laboratory studies in fact show that people are not very good at remembering what they actually experienced, and often make mistakes that involve the insertion of new information into a memory.

The bottom line of reconsolidation research is that your memory of some experience is only as good as your last recollection of the experience. Each use of a memory changes the memory. Obviously, the changes are not always so dramatic as what I have described. But the fact is that memory can, at least to some extent, be changed by experience, and sometimes the changes can be striking.

There are a number of practical implications of this research. One is that it might be possible to relieve emotional stress by having people remember their stressful experiences and then interfering with reconsolidation. This is pretty much what happened to Jim Carrey's character in Eternal Sunshine of the Spotless Mind. But there is also evidence that it works in real-life situations with trauma victims. Studies in rats also suggest that this same approach can be used to reduce the ability of drug-related cues to produce relapse.

Memory works pretty well most of the time. But we should be careful as a society when we make significant decisions on the basis of one person's memory. The only way a memory remains "pure" and resistant to change is by never being used. The most accurate memories are indeed the ones never remembered. Be careful about what you remember.

evgeny_morozov's picture

Contributing Editor, Foreign Policy; Syndicated Columnist; Author, The Net Delusion

As it might take decades for the Internet to rewire how our brains actually process information, we should expect the most immediate changes to be social rather than biological in nature. Of those, two bother me in particular. One has to do with how the Internet changes what we think about; the other, with who gets to do the thinking.

What I find particularly worrisome with regard to the "what" question is the rapid and inexorable disappearance of retrospection and reminiscence from our digital lives. One of the most significant but overlooked Internet developments of 2009—the arrival of the so-called "real-time Web", whereby all new content is instantly indexed, read, and analyzed—is a potent reminder that our lives are increasingly lived in the present, completely detached even from the most recent of pasts. For most brokers dealing on today's global information exchange, the past is a "strong sell".

In a sense, this is hardly surprising: the social beast that has taken over our digital lives has to be constantly fed with the most trivial of ephemera. And so we oblige, treating it to countless status updates and zettabytes of multimedia (almost a thousand photos are uploaded to Facebook every second!). This hunger for the present is deeply embedded in the very architecture and business models of social networking sites. Twitter and Facebook are not interested in what we were doing or thinking about five years ago; it's what we are doing or thinking about right now that they would really like to know.

These sites have good reasons for such a fundamentalist preference for the present, as it greatly enhances their ability to sell our online lives to advertisers: after all, much of the time we are thinking of little else but satisfying our needs, spiritual or physical, and the sooner our needs can be articulated and matched with our respective demographic group, the more likely it is that we'll be coerced into buying something online.

Our ability to look back and engage with the past is one unfortunate victim of such reification of thinking. Thus, amidst all the recent hysteria about the demise of forgetting in the era of social networking, it's the demise of reminiscence that I find deeply troublesome. The digital age presents us with yet another paradox: while we have nearly infinite space to store our memories as well as all the multi-purpose gadgets to augment them with GPS coordinates and 360-degree panoramas, we have fewer opportunities to look back and engage with those memories.

The bottomless reservoirs of the present have blinded us to the positive and therapeutic aspects of the past. For most of us, "re-engaging with the past" today means nothing more than feeling embarrassed over something that we did years ago after it has unexpectedly resurfaced on social networks. But there is much more to reminiscence than the feeling of embarrassment. Studies show that there is an intricate connection between reminiscence (particularly about positive events in our lives) and happiness: the more we do of the former, the more we feel of the latter. Substituting links to our Facebook profiles and Twitter updates for links to our past risks turning us into hyperactive, depressive, and easily irritated creatures who do not know how to appreciate our own achievements.

The "who" question â€" i.e. who gets to do the thinking in the digital age â€" is much trickier. The most obvious answer â€" that the Internet has democratized access to knowledge and we are all thinkers now, bowing over our keyboards much like the character of Rodin's famous sculpture â€" is wrong. One of my greatest fears is that the Internet would widen the gap between the disengaged masses and the over engaged elites, thus thwarting our ability to collectively solve global problems â€" climate change and the need for smarter regulation in the financial industry come to mind â€" that require everyone's immediate attention. The Internet may yield more "thinking" about such issues but such "thinking" would not be equally distributed.

The Marxists have been wrong on many issues, but they were probably right about the reactionary views espoused by the "lumpenproletariat". Today we are facing the emergence of the "cyber-lumpenproletariat": people who are being sucked into the digital whirlwind of gossip sites, trashy video games, populist and xenophobic blogs, and endless poking on social networking sites. The intellectual elites, on the other hand, continue thriving in the new digital environment, exploiting superb online tools for scientific research and collaboration, streaming art-house films via Netflix, swapping their favorite books via e-readers, reconnecting with musical treasures of bygone eras via iTunes, and, above all, perusing materials in giant online libraries like the one that Google could soon unveil. The real disparities between the two groups become painfully obvious once members of the cyber-lumpenproletariat head to the polls and push for issues of an extremely dubious—if not outright unethical—nature (the recent referendum on minarets in Switzerland is a case in point; the fact that Internet users voted the legalization of marijuana as the most burning issue on Obama's change.gov site is another).

As an aside, given the growing concerns over copyright and the digitization of national cultural heritage in many parts of the world, there is a growing risk that this intellectual cornucopia will be available only in North America, creating yet another divide. Disconnected from Google's digital library, even the most prestigious universities in Europe or Asia may look less appealing than middling community colleges in the US. This may seem counterintuitive, but it's increasingly likely that the Internet will not diffuse knowledge-production and thinking around the globe, but rather further concentrate it in one place.

xeni_jardin's picture

Tech Culture Journalist; Partner, Contributor, Co-editor, Boing Boing; Executive Producer, host, Boing Boing Video

I travel regularly to places with bad connectivity. Small villages, marginalized communities, indigenous land in remote spots around the globe. Even when it costs me dearly, on a spendy satphone or in gold-plated roaming charges, my search-itch, my tweet twitch, my email toggle, those acquired instincts now persist.

The impulse to grab my iPhone or pivot to the laptop is now automatic when I'm in a corner my own wetware can't get me out of. The instinct to reach online is so familiar now, I can't remember the daily routine of creative churn without it.

The constant connectivity I enjoy back home means never reaching a dead end. There are no unknowable answers, no stupid questions. The most intimate or not-quite-formed thought is always seconds away from acknowledgement by the great "out there."

The shared mind that is the Internet is a comfort to me. I feel it most strongly when I'm in those far-away places, tweeting about tortillas or volcanoes or voudun kings, but only because in those places so little else is familiar. But the comfort of connectivity is an important part of my life back on more familiar ground, too, where I take it for granted.

The smartphone in my pocket yields more nimble answers than an entire paper library, grand and worthy as the library may be. The paper library doesn't move with me throughout the world. The knowledge you carry with you is worth more than the same knowledge it takes more minutes, more miles, more action steps to access. A tweet query, a Wikipedia entry, a Googled text string, all are extensions of the internal folding and unfolding I used to call my own thought. But the thought process that was once mine is now ours, even while in progress, even before it yields a finished work.

That's how the Internet changed the way I think. I used to think of thought as the wobbly, undulating trail I follow to arrive at a final, solid, completed work. The steps you take to the stone marker at the end. But when the end itself is digital, what's to stop the work from continuing to undulate, pulsate, and update, just like the thought that brought you there?

I often think now in short bursts of thought, parsed out 140 characters at a time, or blogged in rough short form. I think aloud and online more, because the call and response is a comfort to me. I'm spoiled now, spoiled in the luxury of knowing there's always a ready response out there, always an inevitable ping back. Even when the ping back is sour or critical, it comforts me. It says "You are not alone."

I don't believe there's such a thing as too much information. I don't believe Google makes us dumber, or that prolonged Internet fasts or a return to faxes are a necessary part of mind health. But data without the ability to divine is useless. I don't trust algorithms like I trust intuition: the art of dowsing through data. Once, wisdom was measured by memory, by the capacity to store and process and retrieve on demand. But we have tools for that now. We made machines that became shared extensions of mind. How will we define wisdom now? I don't know, but I can ask.

andrew_lih's picture

Associate Professor of Journalism, American University; Author, The Wikipedia Revolution

What has changed my way of thinking is the ability of the Internet to support the deliberative aggregation of information, through filtering and refinement of independent voices, to create unprecedented works of knowledge.

Wikipedia is the greatest creation of massive collaboration so far. That we have a continuously updated, working draft of history that captures the state of human knowledge down to the granularity of each second is unique in the human experience.

Wikipedia, and now Twitter, as generic technical platforms have allowed participants to modify and optimize the virtual workspace to evolve new norms through cultural negotiation. With only basic general directives, participants implicitly evolve new community conventions through online stigmergic collaboration.

With the simple goal of writing an encyclopedia, Wikipedians developed guidelines regarding style, deliberation and conflict resolution while crafting community software measures to implement them. In the Twitter universe, retweeting and hashtags were organically crafted by users, extending the "microblogging" concept to fit emerging community desires. This virtual blacksmithing in both the Wikipedia and Twitter workspaces supports a form of evolvable media that is 'impossibly' supported by the Internet.

So far, our deep experiences with this form of collaboration have been in the domain of textual data. We see this also in journalistic endeavors that seek truth in public documents and records. News organizations such as Talking Points Memo and The Guardian (UK) have successfully mobilized the crowd to tackle hundreds of thousands of pages of typically intractable data dumps. Mature text tools for searching, differential comparison and relational databases have made all this possible.
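As a small illustration of the "differential comparison" tooling credited here (my own example, with invented document text, not anything from the essay), Python's standard difflib module produces the kind of line-by-line revision view that wiki software and crowdsourced document reviews rely on:

    import difflib

    # Two hypothetical drafts of a public record, invented for illustration.
    draft_1 = ["The council met on Tuesday.", "Budget approved: $1.2M."]
    draft_2 = ["The council met on Wednesday.", "Budget approved: $1.4M.",
               "Vote: 5 to 2."]

    # unified_diff highlights exactly what changed between revisions,
    # the same mechanism behind a wiki's "diff" view.
    for line in difflib.unified_diff(draft_1, draft_2,
                                     fromfile="draft_1", tofile="draft_2",
                                     lineterm=""):
        print(line)

Cheap, reliable diffs of this sort are what let thousands of strangers verify each other's edits instead of trusting them.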

We have only started to consider the implications in the visual and multimedia domain. Today we lack sufficient tools to do so, but we will see more collaborative creation, editing and filtering of visual content and temporal media. Inevitably, the same creative stigmergic effect of Internet-enabled collaboration in the audio-visual domain will result in works of knowledge beyond our current imagination.

It is hard to predict exactly what they will be. But if you had asked me in 2000 whether something like Wikipedia was possible, I would have said absolutely not.

judith_rich_harris's picture

Independent Investigator and Theoretician; Author, The Nurture Assumption; No Two Alike: Human Nature and Human Individuality

The Internet dispenses information the way a ketchup bottle dispenses ketchup. At first there was too little; now there is too much.

In between, there was a halcyon interval of just-enoughness. For me, it lasted about ten years.

They were the best years of my life.

daniel_l_everett's picture

Linguistic Researcher; Dean of Arts and Sciences, Bentley University; Author, How Language Began

I cannot use the Internet without thinking about the primitive research conditions I labored under during the late 1970s and early 1980s in the Brazilian Amazon, when I spent months at a time in complete isolation with the Pirahã people. My only connection with the wider world was a large and clunky Philips short-wave radio I bought in São Paulo. In the darkness of many Amazonian nights, I turned the volume low and listened, when all the Pirahãs and my family were asleep, to music shows like 'Rock Salad', to individual artists such as Joan Baez and Bob Dylan, and to news events like the Soviet invasion of Afghanistan and the election of Ronald Reagan. As much as I enjoyed my radio, though, I wanted to do more than just listen passively. I wanted to talk! I would lie awake after discovering some difficult grammatical or cultural fact and feel lost at times. I could barely wait to ask people questions about the data I was collecting in the village and my ideas about them. I couldn't, though. Too isolated. So I put thoughts of collaboration and consultation out of my head. Now this wasn't a completely horrible outcome. Isolation taught me to think independently. But there were times when I would have liked to have had a helping hand.

All that changed in 1999. I purchased a satellite phone with Internet capability. I could email from the Amazon! (And the US taxpayer would even foot the bill—I added the costs of connection time to my National Science Foundation budgets.)

Now I could read an article or a book in the Pirahã village and immediately contact the author. I learned that if you begin your email with, "Hi, I am writing to you from the banks of the Maici river in the Amazon jungle," you almost always get a response. I would send out half-baked ideas to colleagues and people I didn't even know around the world and get responses back quickly — sometimes while I was floating down the Maici river in my boat, drinking a beer, and relaxing from the demands of being the main entertainment for a village of practical-joking Pirahãs. After reading these responses I would discard some of my ideas, further develop others, and, most importantly, get brand new ones. I could not have telephoned all of my interlocutors. Most were too busy to take random phone calls from conversation-hungry Amazonianists. And I didn't know most of them all that well. Sending a regular letter was not possible from the Pirahã village. My thinking about language and culture was altered profoundly by access to fresh intellectual energy.

In the city from where I now do most of my work, the Internet has become an extension of my memory — it combats the occasional "senior moment", helping me to find names, facts, and places instantly (or so it seems). It gives me a second, bigger brain. The Internet has allowed me to learn from people I have never met. It placed me in a university that profoundly affected my career, my research, and my worldview.

I rarely connect to the Internet from the Amazon these days. I am not there as long or as frequently as in the past and so most of the time, I simply want to enjoy being with the people I am visiting. I have learned that the Internet is just a tool. It doesn't fit every job. I avoid using the Internet for tasks that require a more personal connection, such as administering my university department or talking to my children. But if it is just a tool, it is a wondrous tool. It changed my thinking (and my approach to thinking) like the first chainsaw must have affected loggers. The Internet gave me access to as much information (for good or ill) as any researcher in the world, even from the rain forest.

david_gelernter's picture

Computer Scientist, Yale University; Chief Scientist, Mirror Worlds Technologies; Author, America-Lite: How Imperial Academia Dismantled our Culture (and ushered in the Obamacrats)

The Internet is virtualizing the universe, which changes the way I act and think. "Virtualization" (a basic historical transition, like "industrialization") means that I spend more & more of my time acting-within and thinking about the mirror-reflection of some external system or institution in the (smooth, pond-like) surface of the Internet. But the continuum of the Cybersphere will emerge from today's bumpy cob-Web when Virtualization reaches the point at which the Internet develops its own emergent properties and systems: when we stop looking at the pixels (the many separate sites and services that make up the Web) and look at the picture. (It's the picture, not the pixels! Eventually top-down thinking will replace bottom-up engineering in the software world — which will entail roughly a 99.9% turnover in the current population of technologists.)

Conversation spaces, for example, will be simple emergent systems in the Cybersphere, where I talk and listen (or read and write) in a space containing people with whom I like to converse, with no preliminary set-up (so long as there's a computer nearby), as if I were in a room with friends. If I want someone's attention I say his name or look at him; if I speak a little louder, I'm seeking a general discussion. If I say "Let's talk about Jasper Johns," the appropriate group of people materializes. If one of them is busy, I can speak now & he can speak back to me later, & I can respond later still. (Some people claim to be good at multi-tasking; we'll see how many slow-motion conversations they can keep going simultaneously.)

Today there are many universities & courses online; eventually, as Virtualization progresses, we'll see many or most absorbed into a world-university where you can walk the halls, read the bulletin boards & peek into classrooms within a unified space — without caring which conventional university or Web site contributed what. We'll see new types of institutions and objects emerge, too; virtual objects and institutions will absorb their own histories (like cloth absorbing the fragrance of flowers), so I can visit Virtual Manhattan now or roll it backwards in time; a large subset of all the knowledge that exists about (say) Wells Cathedral is absorbed into the virtual or emergent Wells Cathedral. At Virtual Wells, I can dive deeper for detail about any aspect of the place, or roll the building (& its associated ideas and institutions) backwards in time until they vanish "into the mists of history"; or, for that matter, tentatively push Virtual Wells forward in time (which is not so easy — like pushing something uphill), & see what can be calculated, forecast or guessed about the cathedral's future a day, a week or a thousand years from now.

Virtualization has the important intellectual side-effect of leading us towards a better understanding of the relation between emergent properties & virtual machines or systems. Thus "I" am an emergent property of my body & mind; "I" (my subjective experience of the world & myself) am a virtual machine, of sorts; but "I" (or "consciousness") am just as real (despite being virtual) as the pull-down menu built of software — or the picture that emerges from the pixels. Like industrialization, virtualization is an intellectual as well as a technological & economic transition; like industrialization, it's a change in the texture of time.

sam_harris's picture

Neuroscientist; Philosopher; Author, Making Sense

It is now a staple of scientific fantasy, or nightmare, to envision that human minds will one day be uploaded onto a vast computer network like the Internet. While I am agnostic about whether we will ever break the neural code, allowing our inner lives to be read out as a series of bits, I notice that the prophesied upload is slowly occurring in my own case. For instance, the other day I recalled a famous passage from Adam Smith that I wanted to cite: something about an earthquake in China. I briefly considered scouring my shelves in search of my copy of The Wealth of Nations. But I have thousands of books spread throughout my house, and they are badly organized. I recently spent an hour looking for a title, and then another skimming its text, only to discover that it wasn't the book I had wanted in the first place. And so it would have proved in the present case: for the passage I dimly remembered from Smith is to be found in The Theory of Moral Sentiments. Why not just type the words "adam smith china earthquake" into Google? Mission accomplished.

Of course, more or less everyone has come to depend on the Internet in this way. Increasingly, however, I rely on Google to recall my own thoughts. Being lazy, I am prone to cannibalizing my work: something said in a lecture will get plowed into an op-ed; the op-ed will later be absorbed into a book; snippets from the book may get spoken in another lecture. This process will occasionally leave me wondering just how and where and to what shameful extent I have plagiarized myself. Once again, the gates of memory swing not from my own medial temporal lobes but from a computer cluster far away, presumably where the rent is lower.

This migration to the Internet now includes my emotional life. For instance, I occasionally engage in public debates and panel discussions where I am pitted against some over-, under-, or mis-educated antagonist. "How did it go?" will be the question posed by wife or mother at the end of the day. I now know that I cannot answer this question unless I watch the debate online — for my memory of what happened is often at odds with the later impression I form based upon seeing the exchange. Which view is closer to reality? I have learned to trust the YouTube version. In any case, it is the only one that will endure.

Increasingly, I develop relationships with other scientists and writers that exist entirely online. Jerry Coyne and I just met for the first time in a taxi in Mexico. But this was after having traded hundreds of emails. Almost every sentence we have ever exchanged exists in my Sent Folder. Our entire relationship is, therefore, searchable. I have many other friends and mentors who exist for me in this way, primarily as email correspondents. This has changed my sense of community profoundly. There are people I have never met who have a better understanding of what I will be thinking tomorrow than some of my closest friends do.

And there are surprises to be had in reviewing this digital correspondence. I recently did a search of my Sent Folder for the phrase "Barack Obama" and discovered that someone wrote to me in 2004 to say that he intended to give a copy of my first book to his dear friend, Barack Obama. Why didn't I remember this exchange? Because, at the time, I had no idea who Barack Obama was. Searching my bit stream, I am reminded not only of what I used to know, but of what I never properly understood.

I am by no means infatuated with computers. I do not belong to any social networking sites; I do not tweet (yet); and I do not post images to Flickr. But even in my case, an honest response to the Delphic admonition "know thyself" already requires an Internet search.

paul_w_ewald's picture

Professor of Biology, Amherst College; Author, Plague Time

When I was a kid in the early '60s my mother took me on weekly trips to the Wilmette Public Library. It was a well-stocked warren of interconnected sandy-brick buildings that grew in increments as Wilmette morphed from farmland to modest houses with vacant lots, to an upwardly mobile, bland, Chicago suburb, and finally to a pricey, bland, Chicago suburb. My most vivid memory of those visits was the central aisle, flanked by thousands of books reflecting glints of "modern" fluorescent lights from their crackly plastic covers. I decided to read them all. I began taking out five books each weekend with the idea that I would exchange them for another five a week later, and continue until the mission was accomplished. Fortunately for my adolescence, I soon realized a deflating fact: the library was acquiring more than five books per week.

The modern Internet has greatly increased the availability of information, both the valuable stuff and the flotsam. Using a conceptual compass, a generalist can navigate the flotsam to gain the depth of a specialist in many areas. The compass-driven generalist need no longer be dismissed as the Mississippi River, a mile wide and a foot deep.

My current fixation offers an illustration. I'm trying to develop a unified understanding of the causes of cancer. This goal may seem like a pipe-dream. Quick reference to the Internet seems to confirm this characterization. Plugging "cancer" into Google I got 173 million hits, most of them probably flotsam. Plugging "cancer" into PubMed I got 2.3 million scientific works. Some of these will be flotsam, but most have something of value. If I read 10 papers per day every day, I could read all 2.3 million papers in about 630 years. These numbers are discouraging, but it gets worse. PubMed tells me that in 2009 there were 280 articles on cancer published per day. Memories of the Wilmette Public Library loom large.
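A back-of-the-envelope sketch makes that arithmetic explicit. The figures are the ones quoted above; the Python below is simply a worked example of the calculation, not anything from the original essay:

    # Reading backlog vs. publication rate, using the figures quoted above.
    papers = 2_300_000   # cancer papers indexed in PubMed (2010 figure)
    read_per_day = 10    # an ambitious reading pace
    new_per_day = 280    # cancer articles published per day in 2009

    years_to_clear = papers / read_per_day / 365
    print(f"Years to clear the backlog: {years_to_clear:.0f}")  # about 630

    # Worse: for every 10 papers read, 280 new ones appear.
    print(f"Net daily growth while reading: {new_per_day - read_per_day} papers")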

I navigate through this storm of information using my favorite conceptual compass: Darwin's theory of evolution by natural selection. Application of evolutionary principles often draws attention to paradoxes and flaws in arguments. These problems, if recognized, are often swept under the rug, but they become unavoidably conspicuous when the correct alternative argument is formulated. One of my research strategies is to identify medical conventional wisdom that is inconsistent with evolutionary principles. I then formulate alternative explanations that are consistent with those principles, and evaluate all of them against the evidence.

In the case of cancer, expert opinion has focused on mutations that transform well-behaved cells into rogue cells. This emphasis (bias?) has been so narrow that experts have dismissed other factors as exceptions to the rule. But it raises a paradox: the chance of getting the necessary mutations without destroying the viability of the cell seems much too low to account for the widespread occurrence of cancers. Paramount among the cancer-inducing mutations are those that disrupt regulatory processes that have evolved to prevent cancer and other diseases of cell proliferation. One of these barriers to cancer is the arrest of cellular replication. Another is a cap on the total number of cell divisions. Still another is the tendency for cells to commit suicide when genetic damage is detected.

For a century, research has shown that infections can cause cancer. For most of this time this knowledge was roundly dismissed as applying only to nonhuman animals. Over the past thirty years, however, the connection between infection and human cancer has become ever stronger. In the 1970s most cancer experts concluded that infection could be accepted as a cause of no more than 1% of human cancer. Today infectious causes are generally accepted for about 20% of human cancer, and there's no end to this trend in sight.

When infections were first found to cause cancer, experts adjusted their perspective by the path of least resistance. They assumed that infections contribute to cancer because they increase mutation rate. An alternative view is that infectious agents evolve to sabotage the barriers to cancer. Why? Because barriers to cancer are also barriers to persistence within a host, particularly for viruses. By causing the cells they live in to divide in a precancerous state, viruses can survive and replicate below the immunological radar.

The depth of biological knowledge and the ability of the Internet to access this depth allow even a generalist to evaluate these two alternative explanations. Every cancer-causing virus that has been well studied is known to sabotage these barriers. Additional mutations (some of them perhaps induced by infection) then finish the transformation to cancer.

Which viruses evolve persistence? This question is of critical practical importance because we are probably in the midst of determining the full scope of infection-induced cancer. Evolving an ability to commandeer host cells and drive them into a pre-cancerous state is quite a feat, especially for viruses, which tend to have only a dozen or so genes. To evolve mechanisms of persistence, viruses probably need a long time or very strong selective pressures over a short period of time. Evolutionary considerations suggest that transmission by sex or high-contact kissing could generate such strong selection, because the long intervals between changes in sex or kissing partners (for most people) place a premium on persistence within an individual. The literature on human cancer viruses confirms this idea: almost all are transmitted by kissing or by sex.

The extent to which this information improves quality and quantity of life will depend on whether people get access to it and alter their behavior to reduce their risk. The earlier the better, because exposure to these viruses rises dramatically soon after puberty. Luckily, kids now have broad access to information before they have access to sexual partners. It will be tougher for the rest of us who grew up before the modern Internet, in the primitive decades of the 20th century.

neil_gershenfeld's picture

Physicist, Director, MIT's Center for Bits and Atoms; Co-author, Designing Reality

The Internet is many things: good and bad (and worse) business models, techno-libertarian governance and state censors, information and misinformation, empowerment and addiction. But at heart it is the machine with the most parts ever created. What I've learned from the Internet comes not from Web 2.0 or anything-else.0; it's the original insights from the pioneers that made its spectacular growth possible.

One is interoperability. While this sounds like technological motherhood and apple pie, it means that the Internet protocols are not the best choice for any particular purpose. They are, however, just good enough for most of them, and sacrificing optimality has bought a world of unplanned synergies.

A second is scalability. The Internet protocols don't contain performance numbers that impose assumptions about how they will be used, which has allowed their performance to be scaled over 6 orders of magnitude, far beyond anything initially anticipated. The only real exception to this was the address size, which is the one thing that has needed to be fixed.
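That exception is presumably the move from 32-bit to 128-bit addresses (IPv4 to IPv6), and the scale of the fix is easy to make concrete. A minimal sketch in Python, with the two bit widths as the only inputs:

    # Address-space sizes from address bit widths (IPv4 vs. IPv6).
    ipv4 = 2 ** 32    # 4,294,967,296 possible addresses
    ipv6 = 2 ** 128   # roughly 3.4 * 10**38 possible addresses
    print(f"IPv4: {ipv4:,} addresses")
    print(f"IPv6/IPv4 ratio: {float(ipv6 // ipv4):.1e}")  # about 7.9e28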

Third is the end-to-end principle: the functions of the Internet are defined by what is connected to it, not by how it is constructed. New applications can be created without requiring anyone's approval, and can be implemented where information is created and consumed rather than centrally controlled.

And a fourth is open standards. The Internet's standards were a way to create playing fields, not score goals; from VHS vs Betamax to HD-DVD vs Blu-Ray, the only thing that's changed in standards wars has been who's sitting on which side of the table.

These simple-sounding ideas matter more than ever, because the Internet is now needed more than ever, but in places it's never been. Three-quarters of electricity is used by building infrastructure, which wastes about a third of that, yet many of the attempts to make it intelligent hark back to the world of central office switches and dumb telephones. Some of the poorest people on the planet are "served" by some of the greediest telcos, while it's now possible to build communications infrastructure from the bottom up rather than the top down. In these and many more areas, four decades of Internet development are colliding with practices brought to us by (presumably) well-meaning but ill-informed engineers who don't study history as part of an engineering education, and thereby doom everyone else to repeat it. I'd argue that we already know the most important lessons of the Internet; what matters now is not finding them, but making sure we don't need to keep re-finding them.

daniel_haun's picture

Professor of Early Child Development and Culture, Director, Leipzig Research Center for Early Child Development, Leipzig University

I was born in 1977, or 15 b.I. if you like. That is, if you take the 1992 version of the Internet to be the real thing. Anyway, I don't really remember being without it. When I first looked up, emerging out of the dark, quickly forgotten days of a sinister puberty, it was already there. Waiting for me. So it seems to me, it hasn't changed the way I think. Not in a before-after fashion anyway. But even if you are reading these lines through grey, long, uncontrollable eyebrow hair, let me reassure you, it hasn't changed the way you think either. Of course it changed the content of your thinking. Not just through the formidable availability of the information you seek, but most importantly through the information you don't. But from what little I understand about human thought, I don't think the Internet has changed the way you think. Its architecture has not changed yours.

Let me try and give you an example of the way people think. The way you think. I have already told you three times that the Internet hasn't changed the way you think (4 and counting), and every time you read it, my statement becomes more believable to you. Psychologists have reported the human tendency to mistake repetition for truth for more than sixty years. This is called the "illusion of truth effect": you believe to be true what you hear often. The same applies to whatever comes to mind first or most easily.

People, including you, believe the examples they can think of right away to be most representative and therefore indicative of the truth. This is called the "availability heuristic". Let me give you a famous example. In English, what's the relative proportion of words that start with the letter K versus words that have the letter K in 3rd position? The reason most people believe the former to be more common than the latter is that they can easily remember a lot of words that start with a K, but few that have a K in the 3rd position. The truth in fact is that there are three times more words with K in third than in first position. Now if you don't believe people really do this, maybe because you don't, you just proved my point. Availability creates the illusion of truth. Repetition creates the illusion of truth. I would repeat that, but you get my point.
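The K claim is easy to check against any machine-readable word list. A minimal sketch in Python; the dictionary path (a Unix-style word list) is an assumption, and the exact counts will vary with the list used:

    # Tally words with 'k' first vs. third, per the example above.
    first = third = 0
    with open("/usr/share/dict/words") as f:  # assumed word-list location
        for line in f:
            w = line.strip().lower()
            if len(w) >= 3 and w.isalpha():
                first += (w[0] == "k")
                third += (w[2] == "k")
    print(f"k in 1st position: {first}, k in 3rd position: {third}")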

Let's reconsider the Internet. How do you find the truth on the Internet? You use a search engine. Search engines evidently have very complicated ways to determine which pages will be most relevant to your personal quest for the truth. But in a nutshell, a page's relevance is determined by how many other relevant pages link to it. Repetition, not truth. Your search engine will then present a set of ranked pages to you, determining availability. Repetition determines availability, and both together the illusion of truth. Hence, the Internet does just what you would do. It isn't changing the structure of your thinking, because it resembles it. It isn't changing the structure of your thinking, because it resembles it.
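The link-counting idea in that nutshell can be sketched in a few lines. This is a toy power iteration in the spirit of PageRank, not any real search engine's ranking; the four-page link graph and the damping factor are illustrative assumptions:

    # Toy link graph: page -> pages it links to (illustrative only).
    links = {"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["c"]}
    n, d = len(links), 0.85          # d: the conventional damping factor
    rank = {p: 1 / n for p in links}

    for _ in range(50):              # iterate until the ranks settle
        new = {p: (1 - d) / n for p in links}
        for p, outs in links.items():
            for q in outs:           # each page shares its rank among
                new[q] += d * rank[p] / len(outs)  # the pages it cites
        rank = new

    # "c", the most-linked-to page, comes out on top: repetition as relevance.
    print(sorted(rank.items(), key=lambda kv: -kv[1]))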

hu_fang's picture

Writer, Co-founder of Vitamin Creative Space in Guangzhou and the shop in Beijing, China

I am particularly fond of this story: 7 men and 7 women who do not know one another, living in a glass house together for a month. Because their circumstances require that they sever all ties with their previous ways of life, they develop a brand new dynamic amongst themselves, and as a result, this sparks off the fundamental emotions of humankind: love, desire, passion and hatred.

During the first week, their caution with one another is evident. They make tentative attempts at communication, tapping on their past glories and social statuses to get into the good books of others. However, all that happens within the glass house is as convincing as empty promises. Gradually, they realise: the sole elements to victory are their own beings and the purity and simplicity of words; it is these things that are needed to reveal a "true self" to the other party.

Everything in this transparent and closed space is captured by the camera, and viewers from all over the country (including their own loved ones) are gathered around their television sets, watching their every move with intense interest, whipping out their cell-phones to send text messages.

At times, the participants wonder if they should seek help from the director, admit to their personal weaknesses, and then withdraw from the competition. But the lure of millions of dollars in prize money is irresistible (everyone has valid reasons for why they ought to win). They are also constrained by their sense of personal pride, hence no one would allow himself or herself to give up that easily. Some of them endure sleepless nights, and their loved ones — following their struggles as observed by the camera — consequently suffer the same insomnia with them. How difficult it is to make the right decision!

As required, each of them has to say a few words via the camera to their loved ones each day; most of the time, these revolve around their recollections of the past, realizations about life, and confessions when their consciences are pricked. These in turn elicit widespread national tears. When the participants look right into the camera and speak to their loved ones with deep emotion, in actual fact they are gazing at the audience, confiding in them with great sentiment. Time and time again, this experience reiterates to them: what is important is not leaving good impressions on the opposite sex in the glass house, but rather winning the favor of the audiences outside the glass house.

The participants' views are indistinct, and when projected beyond the glass house, are akin to messages sent from earth into the dark unknown that is outer space.

Finally, a pair amongst the participants kiss. Their profound love spurs on another pair, unwilling to be left behind, to embrace each other. This incredibly lucid and protracted feature drives their loved ones outside the glass house to resort to smashing up their television sets in a bid to break that endless kiss.

The fragments of the television set are symbolic of the shattering of the glass house. Yet the image of the kissing lovers remains deeply seared into the minds of that man or that woman; it has become an indelible memory in their lives.

In my youth, I dreamed of becoming the director of that "tragicomic reality show". As the participants are wrapped up in their passionate embraces, I would have the shot cut to a series of personal, private spaces, to focus on the despair on the face of that man or woman sitting before the television.

ralph_gibson's picture

I believe that the history of time has been impacted by several enormous inventions. First was the watch, which unified man's measurement of time. It is interesting to note that China was the last country to join the rest of the world in embracing the clock. It was Chairman Mao who brought in this drastic change, among others.

The invention of photography created several concrete displacements of our perception of the past. The world was quick to accept the photograph as a forcible document containing absolute evidence. This concept endured until sometime in the 1950s when the photograph was no longer accepted in courts of law.

From my point of view, the next great watershed to influence our perception of time has been the arrival of the Internet. I know that it certainly speeds things up, etc., but beyond this obvious fact there seems to be much more to it as an experience. I believe that there is a metaphysical element that surely the mystics could define. But for me the most blatant phenomenon is that my life has compressed to the extent that I am not only aging in the conventional sense but also not aging, because rather than losing information with the passing of "time" I am in fact accruing more and more information.

Being a photographer for over 50 years has instilled in me an innate suspicion of cyberspace, but this superstition/suspicion does not interfere with my use of the Internet as a system of communication and research. I remain indifferent to the entire event of place as it is experienced by young arrivals to the planet, who find the most concrete forms of reality floating upon the surface of their computer display.

I am not a Luddite per se; in fact, I own 4 or 5 computers at all times, but I prefer to use the machine for accessing the Net and for book layout purposes. The idea of an Internet without some form of computer device is, for the time being, out of reach. Thus the Internet and the computer are married in some ethereal place, as yet undefined.

As an amateur musician, I find the Internet linked in time with the nature of music itself. I imagine the sound compressed and sent through space, only to be uncompressed and sent back into space at a different waveform frequency... music... I can hear it now.

Let me answer this question by recounting a personal story that took place 25 years ago in Kenya.

I was in Amboseli National Park, Kenya, to complete my PhD thesis on the development of vervet monkey behavior. I had never travelled to Africa; Kenya was my first exposure to the continent. I gradually learned Kiswahili, the local language, while playing on the local soccer team. I also learned another custom, one that started out as a shock to my male-ness but soon became a lovely manner of interaction: holding hands while talking to good male friends. When I returned to the United States and reached out to hold the hand of a good buddy, I received a dirty look, followed by some lovely expletives. I tried to explain that it was a way of connecting, and was not what he thought. Physical contact is good for us. I tucked this story away for years. It was resuscitated in Australia.

When we contact another human being — holding hands, touching a cheek — we are doing something that is evolutionarily ancient. Our primate ancestors did it all the time, and do it today: they groom. Yes, grooming removes bugs, but it has a massive social effect. It jazzes up the feel-good chemistry of the brain, the endorphins. Travel to a hunter-gatherer society, or watch National Geographic, and you will witness people in contact. To contact is to connect.

Today, most of our connections are through the Internet. The closest haptic experience we have is with our keyboards or the magical glass of an iPhone. We Twitter, Facebook, Chat, IM, Google-Talk, and Skype. And there is even chatiquette to make sure we do it with, you know, appropriate decorum! As remarkable as these technologies are, and as wonderful as they are in enabling us to stay in touch with friends and family who live in other countries or even other states, they have caused a fundamental decline in our capacity for normal, face-to-face interaction. They have, in a word, enabled us to be mindblind: insensitive to others' body language, to the way they hold themselves and express feelings in an eyebrow or a curled nose. Our capacity to connect through the Internet may be breeding a generation of social degenerates.

And online chatting is only one source of disconnect, of breaking the human physical bond. We now kill without seeing our enemies, running the show, as first witnessed in Desert Storm, by remote control, coordinated by private Internet links. The days of looking your enemy in the eye, and driving a knife into his body, are over! So too are we witnessing the decline of the hands-on doctor, the medical man of compassion. Surgeries are being handed over to robots. Of course, doctors control them today. But they no longer have to touch the patient. In fact, because of the Internet, a gifted surgeon in Boston can guide a beginner in Bangkok without even meeting the patient, let alone touching his body.

Lest I be misunderstood: I do not have Webophobia, I greatly profit from the Internet as a consummate informavore, and I am a passionate one-click Amazonian. But our capacity to connect is causing a disconnect. Perhaps Web 3.0 will enable a function to virtually hold hands with our Twitter friends.

christine_finn's picture

Archaeologist; Journalist; Author, Artifacts, Past Poetic

I saw in the new decade wrapped against the English Channel chill under one of the few surviving Timeball Towers in the world. It was hardly a Times Square ball-drop, but my personal nod to a piece of 18th century tech which was a part of communications history and ergo, a link to the Internet. For years this slim landmark signalled navigators off the White Cliffs of Dover to set their chronometers to Greenwich Mean Time. It was a Twitter ball with just one message to relay.

History is my way in, this year. I am answering this year's Question against the deadline, as the answer slips as defiantly as time. The Internet has not only changed the way I think, but prompted me to think about those changes over time, weighted by the unevenness of technology take-up and accessibility to the Net.

I encountered the Web as a researcher at Oxford in the mid-1990s. I learned later that I was at Tim Berners-Lee's former college, but I was pretty blasé about being easily online. I saw the Internet as more a resource for messaging, a faster route than the bike-delivered pigeon post. I didn't see it as a tool for digging and remained resolutely buried in books. But when I visited non-academic friends and asked if I could check emails on their dial-ups, I began to equate the Net with privilege, via phone bill anxiety. As they hovered nervously, I dived in and out again. The Internet was not a joy, but a catch-up mechanism. And for a while, I couldn't think about it any other way.

In 2000, something happened. I found myself drawn to write a book about Silicon Valley. Moving frequently between the UK and America's East and West Coasts, I began to think about the implications of the Internet and, moreover, about how not being able to get online was starting to affect me. What was I missing, intellectually and culturally, by being sometimes out of the game? I began to appreciate a new hunger, for a technology which was still forming. I knew all that information was out there, and I couldn't realise its potential. Sometimes I believed ignorance was bliss. Travelling around America by bus and train for several months was a revelation. At every stop I tried to get online, which usually meant I waited in line. I relished my login gifts: a precious 30 minutes at New York Public Library, a whole hour in small towns in the mid-west, a grabbed few minutes in a university department before giving a lecture somewhere.

Then — joy! — luxuriating in the always-on technology at my friends' homes in the Bay Area, where even the kitchens had laptops panting to 'go search'. But as I made those flights east, the differential was widening. I lost hours trawling the streets of European cities for an Internet cafe, only to feel it was merely a brushed kiss from a stranger; there'd always be someone else in line. I had the taste, and knew tech was building on tech out there in the ether. I was like some Woody Allen character, gazing out of an empty carriage window into a train full of revelers. Being barred from the Web felt like a personal blow; I'd lost the key to the library.

In 2004, I moved to Rome just as the tsunami was showing how the Internet could be mobilised for good. I made my first ever post. I began my own blog, charting Rome's art and culture for Stanford's metamedia lab. The Pope was declining, and by March 2005, St. Peter's piazza was mushrooming with satellite dishes. In the Sistine Chapel, God and Adam were connecting on Michelangelo's ceiling; outside, fingers were twitching on laptops and cellphones for one of the Internet's seminal news moments. But I heard the news the old-fashioned way. Walking with a bag of warm pizza, I heard a sudden churning of bells when it was not the marking of the hour. As I ran with the thousands to St. Peter's, I recall feeling moved by these parallel communications, where people could still be summoned by the bells. A few weeks later, watching wide-screen TV in a Roman cafe, white smoke rose from the Vatican chimney. The ash drifted over the Vatican's ancient walls, morphing into a messaging cacophony of Italian cellphones and clattering keyboards in heaving Internet cafes.

gerd_gigerenzer's picture

Psychologist; Director, Harding Center for Risk Literacy, Max Planck Institute for Human Development; Author, Risk Savvy

When I came to the Center for Advanced Study in Palo Alto in the fall of 1989, I peered into my new cabin-like office. What struck me was the complete absence of technology. No telephone, e-mail, or other communication facilitators. Nothing could interrupt my thoughts. Technology could be accessed outside the offices whenever one wished, but it was not allowed to enter through the door at its own will. This protective belt was deliberately designed to make sure that scholars had time to think, and to think deeply.

In the meantime, the Center, like other institutions, has surrendered to technology. Today, people's minds are in a state of constant alert, waiting for the next e-mail, the next SMS, as if these will deliver the final, earth-shattering insight. I find it surprising that scholars in the "thinking profession" would so easily let their attention be controlled from the outside, minute by minute, just like letting a cell phone interrupt a good conversation. Were messages to pop up on my screen every second, I would not be able to think straight. Maintaining the Center's spirit, I check my email only once a day, and keep my cell phone switched off unless I make a call. An hour or two without interruption is heaven for me.

But the Internet can be used in an active rather than a reactive way, that is, not letting it determine how long we can think and when we have to stop. The question is, does an active use of the Internet change our way of thinking? I believe so. The Internet shifts our cognitive functions from searching for information inside the mind towards searching outside the mind. It is not the first technology to do so.

Consider the invention that changed human mental life more than anything else: writing, and subsequently, the printing press. Writing made analysis possible; with writing, one can compare texts, which is difficult in an oral tradition. Writing also made exactitude possible, as in higher-order arithmetic — without any written form, these mental skills quickly meet their limits. But writing makes long-term memory less important than it once was, and schools have largely replaced the art of memorization by training in reading and writing.

Most of us can no longer memorize hour-long folktales and songs as in an oral tradition. The average modern mind has a poorly trained long-term memory, forgets rather quickly, and searches for information more in outside sources such as books than inside memory. The Internet has amplified this trend of shifting knowledge from the inside to the outside, and has taught us new strategies for finding what one wants using search engines.

This is not to say that before writing, the printing press, and the Internet, our minds did not have the ability to retrieve information from outside sources. But these sources were other people, and the skills were social, such as the art of persuasion and conversation. To retrieve information from Wikipedia, in contrast, social skills are no longer needed.

The Internet is essentially a huge storage room of information, and we are in the process of outsourcing information storage and retrieval from mind to computer, just as many of us have already outsourced the ability to do mental arithmetic to the pocket calculator. We may lose some skills in this process, such as the ability to concentrate over an extended period of time and to store large amounts of information in long-term memory, but the Internet is also teaching us new skills for accessing information.

It is important to realize that mentality and technology are one extended system. The Internet is a kind of collective memory, to which our minds will adapt until a new technology eventually replaces it. Then we will begin outsourcing other cognitive abilities, and hopefully, learn new ones.

jesse_dylan's picture

Filmmaker; Founder, Wondros

The promise of the web when it was first kicked around at CERN and DARPA was to create a decentralized exchange of information. I think the grand power of that idea is that insight can come from literally anywhere. People with differing ideas and backgrounds can test their theories against the world, and at the end of it all: may the best idea win. That's powerful. The fact that the information can be looked at by so many different kinds of people from anywhere on Earth is the Internet's true power, and it's the source of my fascination with it. Right now a little kid can browse the raw data coming from the Large Hadron Collider; he can search the stars for signals of alien life with the SETI project. Anyone can discover the next world-changing breakthrough. That's the point of the Internet.

Also, I think the contribution of search engines in simplifying the research process shouldn't be underestimated. It gives me, and everybody else, the ability to conduct research instantly on our own terms. It's a tremendous leap from what I had to do 10 years ago to find anything out, from knowing who my interview subjects are to where I can get the best BLT in Hollywood, and still, I think the web is in its infancy. The great hubs of information we've constructed, and the tools to traverse them, like Google, Wikipedia, and Facebook, are only going to get deeper and more resonant as we learn how to communicate over them more effectively. When our collective sources of knowledge improve, we will be better for it and our lives will be more meaningful. Just think about what we can do when these tools are applied to the worlds of medicine, science, and art. I can't wait to see what a world full of instant knowledge and open inquiry will bring.

Today, the Internet permeates pretty much all of my thoughts and actions. I access it with my phone, my computer, at home, at work. It gives me untold quantities of new knowledge, inspiration, the ability to connect. I interact with people all over the world from different fields and walks of life, and I see myself and others becoming interconnected hubs of information that the full range of human experience passes through. With the Internet, I feel like I am never truly alone, with the very ends of the Earth a few clicks away.

I was talking with George Whiteshead not long ago about the way to approach innovation. Almost as an aside, he said that the only way to make advances was to have five different strategies, in the hopes that one would work out. Well, the Internet is a place where I can pick from the sum of all strategies people have tried out beforehand, and if I think of something new, I can put it up there to share with the world.

I was at the Mayo Clinic doing a film project on a rare condition called NMO. I heard the story about how the diagnostic test for this condition was discovered by accident: an MS doctor was speaking at a symposium, and a cancer researcher heard his results. That moment, by accident, led to the creation of the test. To me that's not an accident at all. It happened because someone, maybe the Mayo brothers themselves, put in place a system, making the symposium an event that disparate researchers and physicians would attend. The insight came because the platform made it possible for these people and ideas to come together, and that made possible a better level of understanding, and so on and so forth.

When I was a child I learned from looking at the world and reading books. The knowledge I craved was hidden away. Much was secret and unavailable. In my youth, you had to dig deep and explore to find what you were looking for, and often what you wanted was locked up and out of reach. To get from Jack Kerouac to Hank Williams to the pentatonic scale used to be quite a journey. Now it can happen in an instant. Some people would say the old way was a good thing; I disagree.

marti_hearst's picture

Computer Scientist, UC Berkeley, School of Information; Author, Search User Interfaces

In graduate school, as a computer scientist whose focus was on search engines even before the Web, I always dreamed of an Internet that would replace the inefficiencies of libraries, making all important information easily available online. This amazingly came to pass, despite what seemed like insurmountable blockages in the early days.

But something I did not anticipate is how social the Internet would become. When the Web took off, I expected to see recipes online. But today I also expect to learn what other people thought about a recipe, including what ingredients they added, what salad they paired it with and who in their family liked or disliked it. This multitude of perspectives has made me a better cook.

Now if I enjoy a television show, within minutes or hours of the air time of the latest episode, I expect to be able to take part in a delightful, informed conversation about it, anchored by an essay by a professional writer, supported with high-quality user-contributed comments that not only enhance my pleasure of the show, but also reveal new insights.

And I can not only get software online; in the last few years a dizzying cornucopia of free software components has appeared, making it possible to do research and development in days that would have taken months or years in the past. There have always been online forums to discuss software — in fact, coding was unsurprisingly one of the most common topics of early online groups. But the variety and detail of the kind of information that other people selflessly supply each other with today is staggering. And the design of online question-answering sites has moved from crufty to excellent in just a few years.

Most relevant to the scientists and researchers who contribute to the Edge question, we see the use of the Web to enhance communication in the virtual college, with academic meetings being held online, math proofs being done collaboratively on blogs, and deadly viruses being isolated within weeks by research labs working together online.

Sure, we used email in the early eighties, and there were online bulletin boards for at least a decade before the Web, but only a small percentage of the population used them, and usually over a very slow modem. In the early days of the Web, ordinary people's voices were limited primarily to information ghettos like Geocities; most text was produced by academics and businesses. There was very little give-and-take. By contrast, according to a 2009 Pew study, 51% of Internet users now post content online that they have created themselves, and 1 in 10 Americans post something online for others to see every day.

Of course, the increased participation means that there is an increase in the equivalent of what we used to call flame wars, or generally rude behavior, as well as a proliferation of false information and gathering places for people to plan and encourage hurtful activities. Some people think this ruins the Web, but I disagree. It's what happens when everyone is there.

Interestingly, the Edge Question, while innovative in format when it started, still does not allow readers to comment on the opinions offered. I am not saying whether this is a good or a bad thing. The Edge Foundation's goal is to increase public understanding of science by encouraging intellectuals to "express their deepest thoughts in a manner accessible to the intelligent reading public." I just wonder if it is time to embrace the new Internet and let that public write back.

eric_fischl's picture

As visual artists, we might rephrase the question as something like: How has the Internet changed the way we see?

For the visual artist, seeing is essential to thought. It organizes information and how we develop thoughts and feelings. It's how we connect.

So how has the Internet changed us visually? The changes are subtle yet profound. They did not start with the computer. The changes began with the camera and other film-based media, and the Internet has had an exponential effect on that change.

The result is a leveling of visual information, whereby it all assumes the same characteristics. One loss is a sense of scale. Another is a loss of differentiation between materials, and the process of making. All visual information "looks" the same, with film/photography being the common denominator.

Art objects contain a dynamism based on scale and physicality that produces a somatic response in the viewer. The powerful visual experience of art locates the viewer very precisely as an integrated self within the artist's vision. With the flattening of visual information and the randomness of size inherent in reproduction, the significance of scale is eroded. Visual information becomes based on image alone. Experience is replaced with facsimile.

As admittedly useful as the Internet is, easy access to images of everything and anything creates an illusion of knowledge and experience. The world pictured as pictures does not deliver the experience of art seen and experienced physically. It is possible for an art-experienced person to "translate" what is seen online, but the experience is necessarily remote.

As John Berger pointed out, the nature of photography is a memory device that allows us to forget. Perhaps something similar can be said about the Internet. In terms of art, the Internet expands the network of reproduction that replaces the way we "know" something. It replaces experience with facsimile.

ian_gold's picture

Neuroscientist; Canada Research Chair in Philosophy & Psychiatry, McGill University; Coauthor (with Joel Gold), Suspicious Minds

The social changes the Internet is bringing about have changed the way the two of us think about madness. The change in our thinking started, strangely enough, with reflections on Internet friends. The number of your Facebook friends, like the make of the car you drive, confers a certain status. It is not uncommon for someone to have virtual friends in the hundreds, which seems to show, among other things, that the Internet is doing more for our social lives than wine coolers or the pill. In the days before Facebook and Twitter, time placed severe constraints on friendship. Even the traditional Christmas letter, now a fossil in the anthropological museum, couldn't be stamped and addressed 754 times by anybody with a full-time job. Technology has transcended time and made the Christmas letter viable again no matter how large one's social circle. Ironically, electronic social networking has made the Christmas letter otiose; your friends hardly need an account of the year's highlights when they can be fed a stream of reports on the day's events and your reflections on logical positivism or Lady Gaga.

It's hard to doubt that more friends are a good thing, friendship being among life's greatest boons. As Aristotle put it, "without friends no one would choose to live, though he had all other goods." But of course friends are only as good as they are genuine, and it is hard to know what to think about Facebook friends. This familiar idea was made vivid to us recently by a very depressed young woman who came to see one of us for the first time. Among the causes of her depression, she said, was that she had no friends. Sitting on her psychiatrist's couch, desperately alone, she talked; and while she talked, she Twittered. Perhaps she was simply telling her Twitter friends that she was in a psychiatrist's office; perhaps she was telling them that she was talking to her psychiatrist about having no real friends; and perhaps — despite her protestations to the contrary — she was getting some of friendship's benefits by having a virtual community. In the face of this striking contrast between the real and the virtual, however, it's hard not to think that a Facebook or Twitter friend is not quite what Aristotle had in mind.

Still, one probably shouldn't make too much of this. Many of the recipients of the Christmas letter wouldn't have been counted as friends, in Aristotle's sense, either. There is a distinction to be made between one's friends, and one's social group, a much larger community, which might include the Christmas letter people, the colleagues one floor below, or the family you catch up with at Bar Mitzvahs and funerals. Indeed, the Internet is also creating a hybrid social group that includes real friends and the friends-of-friends who are little more than strangers. Beyond these, many of us are also interacting with genuine strangers in chat rooms, virtual spaces, and second lives.

In contrast with friendship, however, an expanded social group is unlikely to be an unalloyed good, because it is hardly news that the people in our lives are the sources not only of our greatest joys but also of our most profound suffering. The sadistic boss can blight an existence however full of affection from others, and the sustaining spouse can morph into That Cheating Bastard. A larger social group is thus a double-edged sword, creating more opportunities for human misery as well as satisfaction. A hybrid social group that includes near-strangers and true strangers may also open the door to real danger.

These mixed blessings of social life seem to have been writ large in our evolutionary history. The last time social life expanded as significantly as it has in the last couple of years was before there were any humans. The transition from non-primates to primates came with an expansion of social groups, and many scientists now think that the primate brain evolved under the pressures of this novel form of social life. With a larger social group there are more opportunities for cooperation and mutual benefit, but there are also novel threats. Each member of a social group will get more food if they hunt together, for example, than they would get hunting alone, but they also expose themselves to free riders who take without contributing. In larger social groups the physical environment is more manageable, but deception and social exploitation emerge as new dangers. Since both cooperation and competition are cognitively demanding, those with bigger brains — and the concomitant brain power — will have the advantage in both. The evolution of human intelligence may thus have been driven primarily by the kindness and the malice of others.

Some of the best evidence for this idea is that there is a relation in primates between brain size (more precisely, relative neocortical volume) and the size of the social group in which the members of the species live: bigger brain, bigger group. Plotting social group as a function of brain size in primates allows us to extrapolate to humans. The anthropologist Robin Dunbar calculated that the volume of the human cortex predicts a social group of 150 — about the size of the villages that would have constituted our social environment for a great deal of evolutionary time, and which can still be found in "primitive" societies.

How could one test this hypothesis? In non-human primates, membership in a social group is typically signaled by mutual grooming. Outside of hairdressing colleges and teenage-girl sleepovers, this isn't a very useful criterion for humans. But the Christmas letter (or card) does better. Getting a Christmas card is a minimal indicator of membership in someone's social group. In an ingenious experiment, Dunbar asked subjects to keep a record of all the Christmas cards they sent. Depending on how one counted, the number of card recipients was somewhere between 125 and 154, just about the right number for our brains. It appears, then, that over the course of millions of years of human history our brains have been tuned to the social opportunities and threats presented by groups of 150 or so. The Internet has turned the human village into a megalopolis and has thus inaugurated what might be the biggest sea-change in human evolution since the primeval campfires.

We come at last to madness. Psychiatry has known for decades that the megalopolis — indeed a city of any size — breeds psychosis. In particular, schizophrenia, the paradigm of a purely biological mental illness, becomes more prevalent as city size increases, even when the city is hardly more than a village. And this is the case not because mental illness in general becomes more common in cities; nor is it true that people who are psychotic tend to drift toward cities or stay in them. In creating much larger social groups for ourselves, ranging from true friends to near-strangers, could we be laying the ground for a pathogenic virtual city in which psychosis will be on the rise? Or will Facebook and Twitter draw us closer to friends in Aristotle's sense, who can act as psychic prophylaxis against the madness-making power of others? Whatever the effects of the Internet on our inner lives, it seems clear that in changing the structure of our outer lives — the lives intertwined with those of others — the Internet is likely to be a more potent shaper of our minds than we have begun to imagine.

esther_dyson's picture

Investor; Chairman, EDventure Holdings; Executive Founder, Wellville; Author: Release 2.0

I love the Internet. It's a great tool precisely because it is so content- and value-free. Anyone can use it for his own purposes, good or bad, big or small, trivial or important. It impartially transmits all kinds of content, one-way or two-way or broadcast, public or private, text or video or sound or data.

But it does have one overwhelming feature: immediacy. (And when the immediacy is ruptured, its users gnash their teeth.) That immediacy is seductive: You can get instant answers, instant responses. If you're lonely, you can go online and find someone to chat with. If you want business, you can send out an e-mail blast and get at least a few responses — a .002 response rate on a modest list of 100,000 addresses still means 200 messages back (including some hate mail). If you want to do good, there are thousands of good causes competing for your attention at the click of your mouse.

But sometimes I think much of what we get on the Internet is empty calories. It's sugar — short videos, pokes from friends, blog posts, Twitter posts (even blogs seem long-winded now), pop-ups and visualizations… Sugar is so much easier to digest, so enticing… and ultimately, it leaves us hungrier than before.

Worse than that, over a long period, many of us are genetically disposed to lose our capability to digest sugar if we consume too much of it. It makes us sick long-term, as well as giving us indigestion and hypoglycemic fits. Could that be true of information sugar as well? Will we become allergic to it even as we crave it? And what will serve as information insulin?

In the spirit of brevity if not immediacy, I leave it to the reader to ponder these questions.

virginia_heffernan's picture

Columnist, The New York Times Magazine; Editorial Director, West Studios; Author, Magic and Loss

People who study the real world, including historians and scientists, may find that the reality of the Internet changes how they think. But those of us who study symbolic systems, including philosophers and literary critics, find in the Internet yet another symbolic system, albeit a humdinger, that yields — spectacularly, I must say — to our accustomed modes of inquiry.

Anyway, a new symbolic order need not disrupt Truth, wherever Truth may now be said to reside (Neurons? Climate change? Atheism?). Certainly to those of us who read more novels than MRIs, the Internet — and especially the World Wide Web — looks like what we know: a fictional world made mostly of words.

Philosophers and critics must only be careful, as we are trained to be careful, not to mistake this new, highly stylized and artificial order, the Internet, for reality itself. After all, all cultural forms and conceits that gain currency and influence — epic poetry, the Catholic mass, the British empire, photography — do so by purporting to be reality, to be transparent, to represent or prescribe life as it really is. As an arrangement of interlocking high, pop and folk art forms, the Internet is no different. This ought to be especially clear when what's meant by "the Internet" is that mostly comic, intensely commercial bourgeois space known as the World Wide Web.

We who have determinedly kept our heads while suffrage, the Holocaust, the highway system, Renaissance perspective, coeducation, the Pill, household appliances, the moon landing, the Kennedy assassination and rock 'n' roll were supposed to change existence forever, cannot falter now. Instead of theatrically changing our thinking, this time, we must keep our heads, which means — to me — that we must keep on reading and not mistake new texts for new worlds, or new forms for new brains.

anthony_aguirre's picture

Professor of Physics, University of California, Santa Cruz; Author, Cosmological Koans

As visual artists, we might rephrase the question as something like: How has the Internet changed the way we see?

For the visual artist, seeing is essential to thought. It organizes information and how we develop thoughts and feelings. It's how we connect.

So how has the Internet changed us visually? The changes are subtle yet profound. They did not start with the computer. The changes began with the camera and other film-based media, and the Internet has had an exponential effect on that change.

The result is a leveling of visual information, whereby it all assumes the same characteristics. One loss is a sense of scale. Another is a loss of differentiation between materials, and the process of making. All visual information "looks" the same, with film/photography being the common denominator.

Art objects contain a dynamism based on scale and physicality that produces a somatic response in the viewer. The powerful visual experience of art locates the viewer very precisely as an integrated self within the artist's vision. With the flattening of visual information and the randomness of size inherent in reproduction, the significance of scale is eroded. Visual information becomes based on image alone. Experience is replaced with facsimile.

As admittedly useful as the Internet is, easy access to images of everything and anything creates an illusion of knowledge and experience. The world pictured as pictures does not deliver the experience of art seen and experienced physically. It is possible for an art-experienced person to "translate" what is seen online, but the experience is necessarily remote.

As John Berger pointed out, photography is by nature a memory device that allows us to forget. Perhaps something similar can be said about the Internet. In terms of art, the Internet expands the network of reproduction that replaces the way we "know" something. It replaces experience with facsimile.

joel_gold's picture

Psychiatrist; Clinical Associate Professor of Psychiatry, NYU School of Medicine; Coauthor (with Ian Gold), Suspicious Minds

The social changes the Internet is bringing about have changed the way the two of us think about madness. The change in our thinking started, strangely enough, with reflections on Internet friends. The number of your Facebook friends, like the make of the car you drive, confers a certain status. It is not uncommon for someone to have virtual friends in the hundreds, which seems to show, among other things, that the Internet is doing more for our social lives than wine coolers or the pill. In the days before Facebook and Twitter, time placed severe constraints on friendship. Even the traditional Christmas letter, now a fossil in the anthropological museum, couldn't be stamped and addressed 754 times by anybody with a full-time job. Technology has transcended time and made the Christmas letter viable again no matter how large one's social circle. Ironically, electronic social networking has made the Christmas letter otiose; your friends hardly need an account of the year's highlights when they can be fed a stream of reports on the day's events and your reflections on logical positivism or Lady Gaga.

It's hard to doubt that more friends are a good thing, friendship being among life's greatest boons. As Aristotle put it, "without friends no one would choose to live, though he had all other goods." But of course friends are only as good as they are genuine, and it is hard to know what to think about Facebook friends. This familiar idea was made vivid to us recently by a very depressed young woman who came to see one of us for the first time. Among the causes of her depression, she said, was that she had no friends. Sitting on her psychiatrist's couch, desperately alone, she talked; and while she talked, she Twittered. Perhaps she was simply telling her Twitter friends that she was in a psychiatrist's office; perhaps she was telling them that she was talking to her psychiatrist about having no real friends; and perhaps — despite her protestations to the contrary — she was getting some of friendship's benefits by having a virtual community. In the face of this striking contrast between the real and the virtual, however, it's hard not to think that a Facebook or Twitter friend is not quite what Aristotle had in mind.

Still, one probably shouldn't make too much of this. Many of the recipients of the Christmas letter wouldn't have been counted as friends, in Aristotle's sense, either. There is a distinction to be made between one's friends and one's social group, a much larger community, which might include the Christmas-letter people, the colleagues one floor below, or the family you catch up with at Bar Mitzvahs and funerals. Indeed, the Internet is also creating a hybrid social group that includes real friends and the friends-of-friends who are little more than strangers. Beyond these, many of us are also interacting with genuine strangers in chat rooms, virtual spaces, and second lives.

In contrast with friendship, however, an expanded social group is unlikely to be an unalloyed good, because it is hardly news that the people in our lives are the sources not only of our greatest joys but also our most profound suffering. The sadistic boss can blight an existence however full of affection from others, and the sustaining spouse can morph into That Cheating Bastard. A larger social group is thus a double-edged sword, creating more opportunities for human misery as well as satisfaction. A hybrid social group that includes near-strangers and true strangers may also open the door to real danger.

These mixed blessings of social life seem to have been writ large in our evolutionary history. The last time social life expanded as significantly as it has in the last couple of years was before there were any humans. The transition from non-primates to primates came with an expansion of social groups, and many scientists now think that the primate brain evolved under the pressures of this novel form of social life. With a larger social group there are more opportunities for cooperation and mutual benefit, but there are also novel threats. Each member of a social group will get more food if they hunt together, for example, than they would get hunting alone, but they also expose themselves to free riders who take without contributing. Living in larger social groups makes the physical environment more manageable, but deception and social exploitation emerge as new dangers. Since both cooperation and competition are cognitively demanding, those with bigger brains — and the concomitant brain power — will have the advantage in both. The evolution of human intelligence may thus have been driven primarily by the kindness and the malice of others.

Some of the best evidence for this idea is that there is a relation in primates between brain size (more precisely, relative neocortical volume) and the size of the social group in which the members of the species live: bigger brain, bigger group. Plotting social group as a function of brain size in primates allows us to extrapolate to humans. The anthropologist Robin Dunbar calculated that the volume of the human cortex predicts a social group of 150 — about the size of the villages that would have constituted our social environment for a great deal of evolutionary time, and which can still be found in "primitive" societies.
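
To make the extrapolation concrete, here is a minimal sketch in Python of a Dunbar-style calculation. The regression coefficients are those commonly quoted from Dunbar's 1992 analysis, and the human neocortex ratio of roughly 4.1 is likewise a commonly cited figure; treat both as approximate illustrations rather than a reanalysis.

```python
# Dunbar-style extrapolation: the primate regression of log group size on
# log neocortex ratio, applied to the human value. The coefficients and the
# human ratio are commonly quoted approximations, not a fresh analysis.
import math

def predicted_group_size(neocortex_ratio: float) -> float:
    # log10(N) = 0.093 + 3.389 * log10(neocortex ratio)
    return 10 ** (0.093 + 3.389 * math.log10(neocortex_ratio))

print(round(predicted_group_size(4.1)))  # human ratio ~4.1 -> roughly 150
```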

How could one test this hypothesis? In non-human primates, membership in a social group is typically designated by mutual grooming. Outside of hairdressing colleges and teenage-girl sleepovers, this isn't a very useful criterion for humans. But the Christmas letter (or card) does better. Getting a Christmas card is a minimal indicator of membership in someone's social group. In an ingenious experiment, Dunbar asked subjects to keep a record of all the Christmas cards they sent. Depending on how one counted, the number of card recipients was somewhere between 125 and 154, just about the right number for our brains. It appears, then, that over millions of years of evolution our brains have been tuned to the social opportunities and threats presented by groups of 150 or so. The Internet has turned the human village into a megalopolis and has thus inaugurated what might be the biggest sea-change in human evolution since the primeval campfires.

We come at last to madness. Psychiatry has known for decades that the megalopolis — indeed a city of any size — breeds psychosis. In particular, schizophrenia, the paradigm of a purely biological mental illness, becomes more prevalent as city size increases, even when the city is hardly more than a village. And this is the case not because mental illness in general becomes more common in cities; nor is it true that people who are psychotic tend to drift toward cities or stay in them. In creating much larger social groups for ourselves, ranging from true friends to near-strangers, could we be laying the ground for a pathogenic virtual city in which psychosis will be on the rise? Or will Facebook and Twitter draw us closer to friends in Aristotle's sense who can act as psychic prophylaxis against the madness-making power of others? Whatever the effects of the Internet on our inner lives, it seems clear that in changing the structure of our outer lives — the lives intertwined with those of others — the Internet is likely to be a more potent shaper of our minds than we have begun to imagine.

george_dyson's picture

Science Historian; Author, Analogia

In the North Pacific ocean, there were two approaches to boatbuilding. The Aleuts (and their kayak-building relatives) lived on barren, treeless islands and built their vessels by piecing together skeletal frameworks from fragments of beach-combed wood. The Tlingit (and their dugout canoe-building relatives) built their vessels by selecting entire trees out of the rainforest and removing wood until there was nothing left but a canoe.

The Aleut and the Tlingit achieved similar results — maximum boat, minimum material — by opposite means. The flood of information unleashed by the Internet has produced a similar cultural split. We used to be kayak builders, collecting all available fragments of information to assemble the framework that kept us afloat. Now, we have to learn to become dugout-canoe builders, discarding unnecessary information to reveal the shape of knowledge hidden within.

I was a hardened kayak builder, trained to collect every available stick. I resent having to learn the new skills. But those who don't will be left paddling logs, not canoes.

w_daniel_hillis's picture

Physicist, Computer Scientist; Co-Founder, Applied Invention; Author, The Pattern on the Stone

It seems that most people, even intelligent and well-informed people, are confused about the difference between the Internet and the Web. No one has expressed this misunderstanding more clearly than Tom Wolfe in Hooking Up:

I hate to be the one who brings this news to the tribe, to the magic Digikingdom, but the simple truth is that the Web, the Internet, does one thing. It speeds up the retrieval and dissemination of information, partially eliminating such chores as going outdoors to the mailbox or the adult bookstore, or having to pick up the phone to get hold of your stock broker or some buddies to shoot the breeze with. That one thing the Internet does and only that. The rest is Digibabble.

This confusion between the network and the services that it first enabled is a natural mistake. Most early customers of electricity believed that they were buying electric lighting. That first application was so compelling that it blinded them to the bigger picture of what was possible. A few dreamers speculated that electricity would change the world, but one can imagine a nineteenth-century curmudgeon attempting to dampen their enthusiasm: "Electricity is a convenient means to light a room. That one thing the electricity does and only that. The rest is Electrobabble."

The Web is a wonderful resource for speeding up the retrieval and dissemination of information and that, despite Wolfe's trivialization, is no small change. Yet, the Internet is much more than just the Web. I would like to discuss some of the less apparent ways that it will change us. By the Internet, I mean the global network of interconnected computers that enables, among other things, the Web. I would like to focus on applications that go beyond human-to-human communication. In the long run, these are applications of the Internet that will have the greatest impact on who we are and how we think.

Today, most people only recognize that they are using the Internet when they are interacting with a computer screen. They are less likely to appreciate when they are using the Internet while talking on the telephone, watching television, or flying on an airplane. Some travelers may have recently gotten a glimpse of the truth, for example, upon learning that their flights were grounded due to an Internet router failure in Salt Lake City, but for most this was just another inscrutable annoyance. Most people have long ago given up on trying to understand how technical systems work. This is a part of how the Internet is changing the way we think.

I want to be clear that I am not complaining about technical ignorance. In an Internet-connected world, it is almost impossible to keep track of how systems actually function. Your telephone conversation may be delivered over analog lines one day and by the Internet the next. Your airplane route may be chosen by a computer or a human being, or (most likely) some combination of both. Don't bother asking, because any answer you get is likely to be wrong.

Soon, no human will know the answer. More and more decisions are made by the emergent interaction of multiple communicating systems, and these component systems themselves are constantly adapting, changing the way they work. This is the real impact of the Internet: by allowing adaptive complex systems to interoperate, the Internet has changed the way we make decisions. More and more, it is not individual humans who decide, but an entangled, adaptive network of humans and machines.

To understand how the Internet encourages this interweaving of complex systems, you need to appreciate how it has changed the nature of computer programming. Back in the twentieth century, programmers had the opportunity to exercise absolute control within a bounded world with precisely defined rules. They were able to tell their computers exactly what to do. Today, programming usually involves linking together complex systems developed by others, without understanding exactly how they work. In fact, depending on the inner workings of other systems is considered poor programming practice, because those workings are expected to change.

Consider, as a simple example, a program that needs to know the time of day. In the unconnected world, computers often asked the operator to type in the time when they were powered on. They then kept track of passing time by counting ticks of an internal clock. Programmers often had to write their own program to do this, but in any case, they understood exactly how it worked. Once computers became connected through the Internet, it made more sense for computers to find out the time by asking one another, so something called Network Time Protocol was invented. Most programmers are aware that it exists but few understand it in detail. Instead, they call a library routine, which asks the operating system, which automatically invokes the Network Time Protocol when it is required.

It would take a long time to explain Network Time Protocol, how it corrects for variable network delays and how it takes advantage of a partially layered hierarchy of network-connected clocks to find the time. Suffice it to say that it is complicated. Besides, I would be describing version 3 of the protocol, and your operating system is probably already using version 4. It really does not make sense for you, even if you are a programmer, to bother to understand how it works.
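
To make the layering concrete: beneath the library routine sits a protocol exchange. The sketch below performs a bare-bones SNTP query in Python against the public pool.ntp.org servers; it reads only the integer transmit timestamp and ignores the delay and drift corrections that make real NTP accurate, so it illustrates the principle rather than the full protocol.

```python
# A minimal SNTP query: the kind of exchange the library routine hides.
# Not production code; real NTP also corrects for network delay and drift.
import socket
import struct
import time

NTP_DELTA = 2208988800  # seconds between the 1900 NTP epoch and the Unix epoch

def sntp_time(server: str = "pool.ntp.org") -> int:
    packet = b"\x1b" + 47 * b"\0"  # LI=0, version=3, mode=3 (client request)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(5)
        s.sendto(packet, (server, 123))
        data, _ = s.recvfrom(48)
    # Bytes 40-43 hold the integer part of the server's transmit timestamp.
    seconds_since_1900 = struct.unpack("!I", data[40:44])[0]
    return seconds_since_1900 - NTP_DELTA

print(time.ctime(sntp_time()))
```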

Now consider a program that is directing delivery trucks to restock stores. It needs to know not just the time of day, but also the locations of the trucks in the fleet, the maps of the streets, the coordinates of its warehouses, the current traffic patterns, and the inventories of its stores. Fortunately it can keep track of all of this changing information by connecting to other computers through the Internet. It can also offer services to other systems that need to track the location of packages, pay drivers, and schedule maintenance of the trucks. All of these systems will depend upon one another to provide information, without depending on exactly how the information is computed. All of these communicating systems are being constantly improved and extended, evolving in time.
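
A sketch of what that decoupling looks like in code. Every service name and value below is a hypothetical stand-in; the point is that the dispatcher consumes each system's answers without any knowledge of how they are computed, so either service can be reimplemented tomorrow without the dispatcher noticing.

```python
# The dispatcher depends on *what* each service reports, never on *how*
# the answer is computed. All names and values here are hypothetical.
from typing import Protocol

class TrafficService(Protocol):
    def delay_minutes(self, route: str) -> int: ...

class InventoryService(Protocol):
    def units_needed(self, store: str) -> int: ...

def dispatch(store: str, route: str,
             traffic: TrafficService, inventory: InventoryService) -> str:
    # A decision assembled from live answers given by independent systems.
    if inventory.units_needed(store) == 0:
        return f"skip {store}"
    if traffic.delay_minutes(route) > 30:
        return f"reroute around {route} to {store}"
    return f"send truck via {route} to {store}"

# Stub implementations standing in for remote systems.
class StubTraffic:
    def delay_minutes(self, route: str) -> int:
        return 45

class StubInventory:
    def units_needed(self, store: str) -> int:
        return 12

print(dispatch("store-17", "route-I80", StubTraffic(), StubInventory()))
```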

Now multiply this picture a millionfold, to include not just the one fleet of trucks, but all the airplanes, gas pipelines, hospitals, factories, oil refineries, mines and power plants, not to mention the salesmen, advertisers, media distributors, insurance companies, regulators, financiers and stock traders. You will begin to perceive the entangled system that makes so many of our day-to-day decisions. Although we created it, we did not exactly design it. It evolved. Our relationship to it is similar to our relationship to our biological ecosystem. We are co-dependent, and not entirely in control.

We have embodied our rationality within our machines and delegated to them many of our choices, and in this process we have created a world that is beyond our own understanding. Our century began on a note of uncertainty, as we worried how our machines would handle the transition to the new millennium. Now we are attending to a financial crisis caused by the banking system miscomputing risks, and a debate on global warming in which experts argue not so much about the data, but about what the computers predict from the data. We have linked our destinies, not only among ourselves across the globe, but with our technology. If the theme of the Enlightenment was independence, our own theme is interdependence. We are now all connected, humans and machines. Welcome to the dawn of the Entanglement.

helen_fisher's picture

Biological Anthropologist, Rutgers University; Author, Why Him? Why Her? How to Find and Keep Lasting Love

For me, the Internet is a return to yesteryear; it simply allows me (and all the rest of us) to think and behave in ways for which we were built long, long ago. Take love. For millions of years, our forebears traveled in little hunting and gathering bands. About 25 individuals lived together day and night; some ten to twelve were children and adolescents; the balance were adults. But everyone knew just about everybody else in a neighborhood of several hundred miles. They got together too. Annually in the dry season, bands congregated at the permanent waters that dotted eastern and southern Africa. Here as many as 500 men, women and children would mingle, chat, dine, dance, perhaps even worship — together. And although a pubescent girl who saw a cute boy at the next campfire might not know him personally, her mother probably knew his aunt or her older brother had hunted with his cousin. All were part of the same broad social web.

Moreover, in the ever-present gossip circles, a young girl could easily collect data on a potential suitor's hunting skills, even whether he was amusing, kind or smart. We think it's natural to court a totally unknown person in a bar or club. But it's far more natural to know a few basic things about an individual before meeting him or her. Internet dating sites, chat rooms, and social networking sites provide these details, enabling the modern human brain to pursue more comfortably its ancestral mating dance.

Then there's the issue of privacy. Some are mystified by the way others, particularly the young, so frivolously reveal their intimate lives on Facebook, Twitter, in emails and via other Internet billboards. This odd human habit has even spilled into our streets and other public places. How many times have you had to listen to someone nonchalantly blare out their problems on a cell phone while you sat on a train or bus? Yet for millions of years our forebears had almost no privacy. With the Internet, we are returning to this practice of shared community.

So for me, the Internet has only magnified — on a grand scale — what I already knew about human nature. Sure, with "the Net," I more easily and rapidly acquire information than in the old days. I can more easily sustain connections with colleagues, friends and family. I no longer take long walks to the post office to mail manuscripts. I don't pound on typewriter keys all day, or use "white-out." My box of carbon paper is long gone. And sometimes I find it easier to express complex or difficult feelings via email than in person or on the phone. But my writing isn't any better… or worse. My perspectives haven't broadened… or narrowed. My values haven't altered. I have just as much data to organize. My energy level is just the same. My workload has probably increased. And colleagues want what they want from me even faster. My daily habits have changed — moderately.

But the way I think? I don't think any harder, faster, longer, or more effectively than I did before I bought my first computer in 1985. In fact, the rise of the Internet only reminds me of how little any of us have changed since the modern human brain evolved more than 35,000 years ago. We are still the same warlike, peace-loving, curious, gregarious, proud, romantic, opportunistic — and naïve — creatures we were before the Internet, indeed before the automobile, the radio, the Civil War, or the ancient Sumerians. We still have the same brain our forebears had as they stalked woolly mammoths and mastodons; and we still chat and warm our hands where they once camped — on land that is now London, Beijing and New York. With the Internet, we just have a much louder megaphone with which to scream who we really are.

nigel_goldenfeld's picture

Physicist, University of Illinois at Urbana-Champaign

Although I used the Internet back when it was just Arpanet, and even earlier as a teenager using a teletype to log into a state-of-the-art Honeywell mainframe from my school, I don't believe my way of thinking was changed by the Internet until around 2000. Why not?

The answer, I suspect, is the fantastic benefit that comes from massive connectivity and the resulting emergent phenomena. Back in my school days, the Internet was linear, predictable, and boring. It never talked back. When I hacked into the computer at MIT running an early symbolic manipulator program, something that could do algebra in a painfully inadequate way, I just used the Internet as a perfectly predictable tool. In my day-to-day life as a scientist, I mostly still do.

Back in 1996, I co-founded a software company that built its products and operated essentially entirely through the Internet; whether this was more efficient than a regular "bricks-and-mortar" company is debatable, but the fact was that through this medium, fabulously gifted individuals who would never have dreamed of relocating for work like this were able to participate in the experiment. But this was still linear, predictable, and an essentially uninteresting use of the Internet.

No, for me, the theoretical physicist geek from central casting, the Internet is changing the way I think, because its "whole is greater than the sum of its parts". When I was a child, they told us that we would be living on the moon, that we would have anti-gravity jet packs, and video phones. They lied about everything but the video phones. With private blogs, Skype and a $40 Webcam, I can collaborate with my colleagues, write equations on my blackboard, and build networks of thought that stagger me with their effectiveness. My students and I work together so effectively through the Internet that its always-on library dominates our discussions and helps us find the sharp questions that drive our research and thinking infinitely faster than before.

My day job is to make discoveries through thought, principally by exploiting analogies through acts of intellectual arbitrage. When we find two analogous questions in what were previously perceived to be unrelated fields, one field will invariably be more developed than the other, and so there is a scientific opportunity. This is how physicists go hunting. The Internet has become a better tool than the old paper scientific literature, because it responds in real time.

To see why this is a big deal for me, consider the following "homework hack". You want to become an instant expert in something that matters to you: maybe a homework assignment, maybe researching a life-threatening disease afflicting someone close to you. You can research it on the Internet using a search engine… but as you know, you can search, but you can't really find. Google gives you unstructured information, but for a young person in a hurry, that is simply not good enough. Search engines are linear, predictable and essentially an uninteresting way to use the Internet.

Instead, try the following hack. Step 1: Make a Wiki page on the topic. Step 2: Fill it with complete nonsense. Step 3: Wait a few days. Step 4: Visit the Wiki page, and harvest the results of what generous and anonymous souls from — well, who cares where they are from or who they are? — have corrected, contributed and enhanced in, one presumes, fits of righteous indignation. It really works. I know, because I have seen both sides of this transaction. There you have it: the emergence of a truly global, collective entity, something that has arisen from humans + Internet. It talks back.

This "homework hack" is, in reality, little more than the usual pattern of academic discourse, but carried out, in William Gibson's memorable phrase, with "one thumb permanently on the fast-forward button". Speed matters, because life is short. The next generation of professional thinkers already have all the right instincts about the infinite library that is their external mind, accessible in real time, and capable of accelerating the already Lamarckian process of evolution in thought and knowledge on timescales that really matter. I'm starting to get it too.

Roughly three billion years ago, microbial life invented the Internet and Lamarckian evolution. For them, the information is stored in molecules, and is recorded in genes that are transmitted between consenting microbes by a variety of mechanisms that we are still uncovering. Want to know how to become a more virulent microbial pathogen? Download the gene! Want to know how to hotwire a motorcycle? Go to the Website! So much quicker than random trial-and-error evolution, and it works … right now! And your children's always-on community of friends, texting "lol"s and other quick messages that really say "I'm here, I'm your friend, let's have a party" is no different than the quorum sensing of microbes, counting their numbers so that they can do something collectively, such as invade a host or grow a fruiting body from a biofilm.

I'm starting to think like the Internet, starting to think like biology. My thinking is better, faster, cheaper and more evolvable because of the Internet. And so is yours. You just don't know it yet.

david_m_eagleman's picture

Neuroscientist, Stanford University; Author, Incognito, Sum, The Brain

The Internet has changed the way I think about the threats of societal collapse. When we learn of the empires that have tumbled before us, it is plausible to think that our civilization will adhere to the same path and eventually fall to a traditional malady — anything from epidemics to resource depletion. But the rapid advance of the Internet has thoroughly (and happily) changed my opinion about our customary existential threats. Here are six ways that I think the possession of a rapid and vast communication network will make us much luckier than our predecessors:

1. Disease Epidemics

One of our more dire prospects for collapse is an infectious disease epidemic. Bacterial or viral epidemics precipitated the fall of the Golden Age of Athens, the Roman Empire, and most of the empires of the Native Americans. The Internet can be our key to survival, because the ability to work telepresently can inhibit microbial transmission by reducing human-to-human contact. In the face of an otherwise devastating epidemic, businesses can keep supply chains running with the maximum number of employees working from home. This won't keep everyone off the streets, but it can reduce host density below the tipping point. If we are well-prepared when an epidemic arrives, we can fluidly shift into a self-quarantined society in which microbes fail due to host sparseness. Whatever the social ills of isolation, they bode worse for the microbes than for us.
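
The "tipping point" here is the standard epidemic-threshold result: scale down the contact rate until the effective reproduction number drops below 1, and an outbreak burns out instead of spreading. A toy SIR model is enough to show the effect; all parameter values below are illustrative, not calibrated to any real disease.

```python
# Toy SIR model of the quarantine tipping point. Telepresence scales the
# contact rate; once R0 * contact_fraction < 1, the epidemic fizzles.
# Parameter values are illustrative only.

def epidemic_peak(contact_fraction: float, r0: float = 2.5,
                  recovery: float = 0.1, days: int = 500) -> float:
    beta = r0 * recovery * contact_fraction  # reduced transmission rate
    s, i = 0.999, 0.001                      # susceptible, infected fractions
    peak = i
    for _ in range(days):
        new_infections = beta * s * i
        s -= new_infections
        i += new_infections - recovery * i
        peak = max(peak, i)
    return peak

for frac in (1.0, 0.6, 0.3):  # at 0.3, R_eff = 0.75 and the outbreak dies
    print(f"contact x{frac}: peak infected fraction = {epidemic_peak(frac):.1%}")
```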

2. Availability of Knowledge

Important discoveries have historically stayed local. Consider smallpox inoculation: this practice was underway in India, China and Africa for at least one hundred years before it made its way to Europe. By the time the idea reached North America, the native civilizations had long collapsed.

And information is not only hard to share, it's hard to keep alive. Collections of learning — from the Library at Alexandria to the Mayan corpus — have fallen to the bonfires of invaders or the winds of natural disasters. Knowledge is hard won but easily lost.

The Internet addresses the problem of knowledge-sharing better than any technology we've had. New discoveries latch on immediately: the information spreads widely and the redundancy prevents erasure. In this way, societies can optimally ratchet up, using the latest bricks of knowledge in their fortification against existential threats.

3. Speed by Decentralization

We are witnessing the downfall of slow central control in the media: news stories are increasingly becoming user-generated nets of dynamically updated information. During the recent California wildfires, locals went to the TV stations to learn whether their neighborhoods were in danger. But the news stations appeared most concerned with the fate of celebrity mansions, so Californians changed their tack: they posted tweets, uploaded geotagged cell phone pics, and updated Facebook. And the balance tipped: the Internet carried the news more quickly and accurately than any news station could. In this decentralized regime, there were embedded reporters on every neighborhood block, and the news shockwave kept ahead of the firefront. In the right circumstances, this head start could provide the extra hours that save us.

4. Minimization of Censorship

Political censorship has been a familiar specter in the last century, with state-approved news outlets ruling the press, airwaves, and copying machines in the former USSR, Romania, Cuba, China, Iraq, and other countries. In all these cases, censorship hobbled the society and fomented revolutions. Historically, a more successful strategy has been to confront free speech with free speech — and the Internet allows this in a natural way. It democratizes the flow of information by offering access to the newspapers of the world, the photographers of every nation, the bloggers of every political stripe. Some postings are full of doctoring and dishonesty while others strive for independence and impartiality — but all are available for the end-user to sift through for reasoned consideration.

5. Democratization of Education

Most of the world does not have access to the education afforded to a small minority. For every Albert Einstein, Yo-Yo Ma or Barack Obama who has the opportunity for education, there are uncountable others who never get the chance. This vast squandering of talent translates directly into reduced economic output. In a world where economic meltdown is often tied to collapse, societies are well-advised to leverage all the human capital they have.

The Internet opens the gates of education to anyone who can get her hands on a computer. This is not always a trivial task, but the mere feasibility redefines the playing field. A motivated teen anywhere on the planet can walk through the world's knowledge — from the webs of Wikipedia to the curriculum of MIT's OpenCourseWare.

6. Energy Savings

It is sometimes argued that societal collapse can be cast in terms of energy: when energy expenditure begins to outweigh energy return, collapse ensues. The Internet addresses the energy problem with a kind of natural ease. Consider the massive energy savings inherent in the shift from snail mail to email. As recently as the last decade, information amassed not in gigabytes but in cubic meters of filing cabinets. Beyond convenience, it may be that the technological shift from paper to electrons is critical to the future. Of course, there are energy costs to the banks of computers that underpin the Internet — but these costs are far less than the forests and coal beds and oil deposits that would be spent for the same quantity of information flow.

The tangle of events that trigger societal collapse can be complex, and there are several existential threats the Internet does not address. Nonetheless, it appears that vast, networked communication can serve as an antidote to several of the most common and fatal diseases of civilization. Almost by accident, we now command the capacity for self-quarantining, retaining knowledge, speeding information flow, reducing censorship, actualizing human capital, and saving energy resources. So the next time a co-worker laments Internet addiction, the banality of tweets, or the decline of face-to-face conversation, I will sanguinely suggest that the Internet — even with all its flashy wastefulness — may just be the technology that saves us.

donald_d_hoffman's picture

Cognitive Scientist, UC, Irvine; Author, The Case Against Reality

Human thought has many sculptors, and each wields special tools for distinct effects. Is the Internet in the tool kit? That depends on the sculptor.

Natural selection sculpts human thought across generations and at geologic time scales. Fitness is its tool, and human nature, our shared endowment as members of a species, is among its key effects. Although the thought life of each person is unique, one can discern patterns of thought that transcend racial, cultural and occupational differences; similarly, although the face of each person is unique, one can discern patterns of physiognomy — two eyes above a nose above a mouth — that transcend individual differences.

Is the Internet in the tool kit of natural selection? That is, does the Internet alter our fitness as a species? Does it change how likely we are to survive and reproduce? Debate on this question is in order, but the burden is surely on those who argue no. Our inventions in the past have altered our fitness: arrowheads, agriculture, the control of fire. The Internet has likely done the same.

But has the Internet changed the patterns of thought that transcend individual differences? Not yet. Natural selection acts over generations; the Internet is but one generation old. The Internet is in the tool kit, but has not yet been applied. Over time, as the Internet rewards certain cognitive skills and ignores or discourages others, it could profoundly alter even the basic patterns of thought that we share as a species. The catch, however, is "over time." The Internet will evolve new offspring more quickly than Homo sapiens, and they, rather than the Internet, will alter human nature. These offspring will probably no more resemble the Internet than Homo sapiens resembles amoebas.

Learning sculpts human thought across the lifetime of an individual. Experience is its tool, and unique patterns of cognition, emotion and physiology are its key effects. Marcel Just and Timothy Keller found that poor readers in elementary school can dramatically improve their skills with six months of intensive training, and that white matter connections in the left hemispheres of their brains are measurably increased in the process.

There are, of course, endogenous limits to what can be learned, and these limits are largely a consequence of mutation and natural selection. A normal infant exposed to English will learn to speak English, but the same infant exposed to C++ or HTML will learn little.

Is the Internet in the tool kit of learning? No doubt. Within the endogenous limits of learning set by one's genetic inheritance, exposure to the Internet can alter how one thinks no less than can exposure to language, literature or mathematics. But the endogenous limits are critical. Multi-tasking, for instance, might be a useful skill for exploiting in parallel the varied resources of the Internet, but genuine multi-tasking, at present, probably exceeds the limitations of the attentional system of Homo sapiens. Over generations, this limitation might ease. What the Internet cannot accomplish as a tool of learning, it might eventually accomplish as a tool of natural selection.

Epigenetics sculpts human thought within a lifetime and across a few generations. Experience and environment are its guides, and shifts in gene expression that trigger shifts in cognition, emotion and physiology are its relevant effects. Oberlander and colleagues found that a mother's anxiety can change the expression of the NR3C1 gene in her child, leading to the child's increased reactivity to stress. Childhood abuse can similarly lead to persistent feelings of anxiety and acute stress in a child, fundamentally altering its thought life.

Is the Internet in the toolkit of epigenetics? Possibly, but no one knows. The field of epigenetics is young, and even the basic mechanisms by which transgenerational epigenetic effects are inherited are not well understood. But the finding that parental behavior can alter gene expression and thought life in a child certainly leaves open the possibility that other behavioral environments, including the Internet, can do the same.

In sum, the relevance of the Internet to human thought depends on whether one evaluates this relevance phylogenetically, ontogenetically or epigenetically. Debate on this issue can be clarified by specifying the framework of evaluation.

w_tecumseh_fitch's picture

Professor of Cognitive Biology, University of Vienna; Author, The Evolution of Language

When I consider the effect of the Internet on my thought, I keep coming back to the same metaphor. What makes the Internet fundamentally new is the many-to-many topology of connections it allows: suddenly any two Internet-equipped humans can transfer essentially any information, flexibly and efficiently. We can transfer words, code, equations, music or video anytime to anyone, essentially for free. We are no longer dependent on publishers or media producers to connect us. This parallels what happened, in animal evolution, as we evolved complex brains controlling our behavior, partially displacing the basically hormonal, one-to-many systems that came before. So let's consider this new information topology from the long evolutionary viewpoint, by comparing it to the information revolution that occurred during animal evolution over the last half-billion years: the evolution of brains.
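
The force of "many-to-many" can be made concrete by counting channels. A broadcast topology offers one link per listener; a network in which any two members can connect directly offers a number of possible links that grows with the square of the population. A minimal sketch of that arithmetic:

```python
# Channel counts in the two topologies the essay contrasts: a one-to-many
# broadcaster reaches n listeners over n links, while a many-to-many network
# permits n * (n - 1) / 2 distinct pairwise connections.

def broadcast_links(n: int) -> int:
    return n  # one hub, one link to each listener

def pairwise_links(n: int) -> int:
    return n * (n - 1) // 2  # every pair may connect directly

for n in (10, 1_000, 1_000_000):
    print(f"n={n:>9,}: broadcast={broadcast_links(n):>9,} "
          f"pairwise={pairwise_links(n):>15,}")
```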

Our planet has been around for 4.5 billion years, and life appeared very early, almost 4 billion years ago. But for three quarters of the subsequent period, life was exclusively unicellular, similar to today's bacteria, yeast or amoebae. The most profound organic revolution, after life itself, was thus the transition to complex multicellular organisms like trees, mushrooms and ourselves.

Consider this transition from the viewpoint of a single-celled organism. An amoeba is a self-sufficient entity, moving, sensing, feeding and reproducing independent of other cells. For three billion years of evolution, our ancestors were all free-living cells like this, independently "doing it for themselves," and were honed by this long period into tiny organisms more versatile and competent than any cell in our multicellular bodies. Were it capable of scorn, an amoeba would surely scoff at a red blood cell as little more than a stupid bag of protoplasm, barely alive, over-domesticated by the tyranny of multicellular specialization.

Nonetheless, being jacks of all trades, such cells were masters of none. Cooperative multicellularity allowed cells to specialize, mastering the individual tasks of support, feeding, and reproduction. Specialization and division of labor allowed teams of cells to vastly outclass their single-celled ancestors in terms of size, efficiency, and complexity, leading to a whole new class of organisms. But this new organization created its own problems of communication: how to ensure smooth, effective cooperation among all of these independent cells? This quandary directly parallels the origin of societies of specialized humans.

Our bodies have essentially two ways of solving the organizational problems raised by coordinating billions of semi-independent cells. In hormonal systems, master control cells broadcast potent signals all other cells must obey. Steroid hormones like estrogen or testosterone enter the body's cells, penetrating their nuclei and directly controlling gene expression. The endocrine system is like an immensely powerful dictatorship, issuing sweeping edicts that all must obey.

The other approach involved a novel cell type specialized for information processing: the neuron. While the endocrine approach works fine for plants and fungi, metazoans (multicellular animals) move, sense and act, requiring a more subtle neural form of control. From the beginning, neurons were organized into networks: they are teamworkers collaboratively processing information and reaching group decisions. Only neurons at the final output stage, like motor neurons, retain direct power over the body. And even motor neurons must act together to produce coordinated movement rather than uncontrolled twitching.

In humans, language provided the beginnings of a communicative organizational system, unifying individuals into larger, organized collectives. Although all animals communicate, their channels are typically narrow and do not support expression of any and all thoughts. Language enables humans to move arbitrary thoughts from one mind to another, creating a new, cultural level of group organization. For most of human evolution, this system was very local, allowing small bands of people to form local clusters of organization. Spoken language allowed hunter-gatherers to organize their foraging efforts, or small farming communities their harvest, but not much more.

The origin of writing allowed the first large-scale societies, organized on hierarchical (often despotic) lines: a few powerful kings and scribes had control over the communication channels, and issued edicts to all. This one-to-many topology is essentially endocrine. Despite their technological sophistication, radio and television share this topology. The proclamations and legal decisions of the ruler (or television producer) parallel the reproductive edicts carried by hormones within our bodies: commands issued to all, which all must obey.

Since Gutenberg, human society has slowly groped its way towards a new organizational principle. Literacy, mail, telegraphs and democracy were steps along the way to a new organizational metaphor, more like the nervous system than hormones. The Internet completes the process: now arbitrarily far-flung individuals can link, share information, and base their decisions upon this new shared source of meaning. Like individual neurons in our neocortex, each human can potentially influence and be influenced, rapidly, by information from anyone, anywhere. We, the metaphoric neurons of the global brain, are on the brink of a wholly new system of societal organization, one spanning the globe with the metaphoric axons of the Internet linking us together.

The protocols are already essentially in place. TCP/IP and HTML are the global brain equivalents of cAMP and neurotransmitters: universal protocols for information transfer. Soon a few dominant languages like English, Chinese and Spanish will provide for universal information exchange. Well-connected collective entities like Google and Wikipedia will play the role of brainstem nuclei to which all other information nexuses must adapt.

Two main problems mar this "global brain" metaphor. First, the current global brain is only tenuously linked to the organs of international power. Political, economic and military power remains insulated from the global brain, and powerful individuals can be expected to cling tightly to the endocrine model of control and information exchange. Second, our nervous systems evolved over 400 million years of natural selection, during which billions of competing false-starts and miswired individuals were ruthlessly weeded out. But there is only one global brain today, and no trial and error process to extract a functional configuration from the trillions of possible configurations. This formidable design task is left up to us.

alison_gopnik's picture

Psychologist, UC, Berkeley; Author, The Gardener and the Carpenter

My thinking has certainly been transformed in alarming ways by a relatively recent information technology, but it's not the Internet. I often sit for hours in the grip of this compelling medium, motionless and oblivious, instead of interacting with the people around me. As I walk through the streets I compulsively check out even trivial messages — movie ads, street signs — and I pay more attention to descriptions of the world — museum captions, menus — than to the world itself. I've become incapable of using attention and memory in ways that previous generations took for granted. Yes, I know reading has given me a powerful new source of information. But is it worth the isolation, the damage to dialog and memorization that Socrates foresaw? Studies show, in fact, that I've become involuntarily compelled to read; I literally can't keep myself from decoding letters. Reading has even reshaped my brain: cortical areas that once were devoted to vision and speech have been hijacked by print. Instead of learning through practice and apprenticeship, I've become dependent on lectures and textbooks. And look at the toll of dyslexia and attention disorders and learning disabilities, all signs that our brains were just not designed to deal with such a profoundly unnatural technology.

Like many others I feel that the Internet has made my experience more fragmented, splintered and discontinuous. But I'd argue that's not because of the Internet itself but because I have mastered the Internet as an adult. Why don't we feel the same way about reading and schooling that we feel about the Web? These changes in the way we get information have had a pervasive and transformative effect on human cognition and thought, and universal literacy and education have only been around for a hundred years or so.

It's because human change takes place across generations, rather than within a single life. This is built into the very nature of the developing mind and brain. All the authors of these essays have learned how to use the Web with brains that were fully developed long before we sent our first e-mail. All of us learned to read with the open and flexible brains we had when we were children. As a result no one living now will experience the digital world in the spontaneous and unselfconscious way that the children of 2010 will experience it, or in the spontaneous and unselfconscious way we experience print.

There is a profound difference between the way children and adults learn. Young brains are capable of much more extensive change — more rewiring — than the brains of adults. This difference between old brains and young ones is the engine of technological and cultural innovation. Human adults, more than any other animal, reshape the world around them. But adults innovate slowly, intentionally, and consciously. The changes that take place within an adult life, like the development of the Internet, are disruptive, attention-getting, disturbing or exciting. But those changes become second nature to the next generation of children. Those young brains painlessly absorb the world their parents created, and that world takes on a glow of timelessness and eternity, even if it was only created the day before you were born.

My experience of the Web feels fragmented, discontinuous, effortful (and interesting!) because, for adults, learning a new technology depends on conscious, attentive, intentional processing. In adults, this kind of conscious attention is a very limited resource. This is even true at the neural level. When we pay attention to something, the prefrontal cortex, the part of our brain responsible for conscious goal-directed planning, controls the release of cholinergic transmitters, chemicals that help us learn, to certain very specific parts of the brain. So as we wrestle with a new technology we adults can only change our minds a little bit at a time.

Attention and learning work very differently in young brains. Young animals have much more widespread cholinergic transmitters than adults, and their ability to learn doesn't depend on planned, deliberate attention. Young brains are designed to learn from everything new, or surprising, or information-rich, even when it isn't particularly relevant or useful.

So children who grow up with the Web will master it in a way that will feel as whole and natural as reading feels to us. But that doesn't mean that their experience and attention won't be changed by the Internet, any more than my print-soaked twentieth-century life was the same as the life of a barely literate nineteenth-century farmer.

The special attentional strategies that we require for literacy and schooling may feel natural since they are so pervasive, and since we learned them at such an early age. But at different times and places, different ways of deploying attention have been equally valuable and felt equally natural. Children in Mayan Indian cultures, for example, are taught to distribute their attention to several events simultaneously, just as print and school teach us to focus on just one thing at a time. I'll never be able to deploy the broad yet vigilant attention of a hunter-gatherer, though, luckily, a childhood full of practice caregiving let me master the equally ancient art of attending to work and babies at the same time.

Perhaps our digital grandchildren will view a master reader with the same nostalgic awe that we now accord to a master hunter or an even more masterly mother of six. The skills of the hyper-literate twentieth century may well disappear, or at least become highly specialized enthusiasms, like the once universal skills of hunting, poetry and dance. It is sad that after the intimacy of infancy our children inevitably end up being somewhat weird and incomprehensible visitors from the technological future. But the hopeful thought is that my grandchildren will not have the fragmented, distracted, alienated digital experience that I do. For them the Internet will feel as fundamental, as rooted, as timeless, as a battered Penguin paperback, that apex of the literate civilization of the last century, feels for me.

olafur_eliasson's picture

The dimensionality of the Internet has yet to be defined, and the principles outlining its space are constantly negotiated through our use of it. With its unique time/space situation — the fact that it is possible to physically be in one place and, simultaneously, have access to the entire world — the Internet can potentially have a huge impact on our understanding of our surroundings.

Ideally, the relation between user and network should be one of mutual exchange: I co-produce the network through my involvement in it, and it co-produces me through the information I get from it. But for this to happen, we have to make better use of the potentials of the Internet, and the Internet has to have an interest in this mutual exchange — it has to invest itself in its users, so to speak. In its current form, the Internet, the way I see it, has signed a contract with a Modernist, two-dimensional conception of space. The relation between it and its users is one of subject and object: I can see it as if it were an image, but I cannot feel it, I'm not present in it; the interaction between the medium and me is too weak.

Being a profoundly democratic medium, opening up unprecedented possibilities of self-expression, freedom of the press and access to information, the Internet is not only the source of unlimited access to knowledge but, paradoxically enough, also the breeding ground of a general acceptance of a lack of competences. Large social communities such as Facebook, which do not produce or exchange any kind of knowledge, seem to flourish, and because search engines are based on trivial algorithmic principles of recognition, it can be hard to find the qualified, critical voices in the bulk of information.

If the Internet is to help us become more consciously involved with the world, it is not enough to just canalise huge amounts of information into society. Search engines should be competence-focused, social networks should relate to competent search engines, and video and search functions should be better integrated. This requires that Google, Yahoo, AOL and the other large companies defining the future of the Internet provide the medium with enough confidence to operate with self-criticism. The only self-criticism the Internet operates with at the moment seems to be that of the market economy — the most efficient, frequently updated and trimmed sites being the ones where money is changing hands. This is not enough. We have to base our use of the Internet on both trust and scepticism.

In this way, the Internet would not stand outside reality and send information in; rather, it would be conceived of as a part of reality, and thus the distinction between subject and object would dissolve, and we would experience the Internet as if it were a three-dimensional space. The Internet would become a reality-producing machine.

bruce_hood's picture

Chair of Developmental Psychology in Society, University of Bristol; Author, The Self-Illusion, Founder of Speakezee

Who has not Googled thyself? Most humans have a concept of self that is constructed in terms of how we think we are perceived by those around us, and the Internet has made that preoccupation trivially easy. Now anyone can assess their impact factor through a multitude of platforms, including Facebook, Twitter and, of course, blogging.

Last year, at the request of my publisher, I started a blog to comment on weird and bizarre examples of supernatural thinking from around the world. From the outset I thought that blogging was a self-indulgent activity, but I agreed to give it a whirl to help promote my book. In spite of my initial reluctance, I very soon became addicted to feedback. It was not enough to post blogs for some unseen audience. I needed validation from visitors that my efforts and opinions were appreciated. Within weeks, I had become a numbers junkie looking for more and more hits.

However, the Internet has also made me conscious of my own insignificance and power at the same time. Within the blogosphere, I am no longer an expert on any opinion, as it is one that can be shared or rejected by a multitude of others. But insignificant individuals can make a significant difference when they coalesce around a cause. As this goes to press, a British company is under public scrutiny for allegedly selling bogus bomb-detecting dowsing rods to the Iraqi security forces. This has come about because of a blog campaign by like-minded skeptics who have used the Internet to draw attention to what they consider to be questionable business activity. This would have been very difficult and daunting in the pre-Internet days, and not something that the ordinary man in the street would have taken on. In this way, the Internet can empower the individual through collective campaigns.

I can make a difference because of the Internet. I'll be checking back on Google to see if anyone shares my opinion.

richard_foreman's picture

Playwright & Director; Founder, The Ontological-Hysteric Theater

How is the Internet changing the way I think? But what is it — this "thinking" that I assume I do along with everybody else? Probably there is no agreement about what this "thinking" consists of. But I certainly do not believe "gathering information" is thinking — and that has obviously been an activity that has expanded and sped up as a result of the Internet. But for me, to "think" is to withdraw from gathered information into a blankness within which something arises — pops out — is born.

Of course it will be maintained that what "pops out" may have its roots in, and may be conditioned by, many factors in my experiential past. But nevertheless — while the Internet swamps us in "connectedness" and "fact" — it is only in the withdrawal from those that I claim a space for thinking.

So in one sense, the Internet expands the arena within which thinking may resonate, and so perhaps the thinking is thereby "attuned" somewhat differently. But I must admit to being one of those who believes that while it is clearly "life-changing," it is in no way, if you will, "soul-changing." Accessing the ever expanding, ever faster Internet means a life that is changing as it becomes the life of a surfer (just as life might change if one moved to a California beach community) — one becomes more and more agile balancing on top of the flow, leaping from hyperlink to hyperlink — giving one's mental "environment" a certain shape based on those chosen jumps.

But the Internet sweeps you away from where and "WHAT" you were — so instead of filling you with the fire to dig deeper into the magic bottomless source that is the self — it lets you drift into the dazed state of having everything at your fingertips — which are used to caress the world, of course, but only the world as it assumes the shape of the now-manifest rather than the world of the still un-imaginable.

So even though I myself do spend LOTS of time on the Internet — (fallen "Pancake Person" that I am) — I can't help being reminded of the Greek philosopher who attributed his long life to avoiding dinner parties. (If only I could avoid the equally distracting Internet, which, in its promise of connectedness and expanded knowledge, is really a substitute social phenomenon.)

The "entire world" that the Internet seems to offer harmonized strangely with the apple offered to Eve from the Tree of Knowledge — ah, we don't believe in those old myths? (I guess one company guru did).

Well, the only hope I see hovering in the never-never land (now real) where the Internet does its work of feeding smart people amphetamines and "dumb" people tranquilizers — the only hope is that the expanding puddle of boiling, bubbling hot milk will eventually COAGULATE, and a new unforeseen pattern will emerge out of all that activity that thought it was aiming at a certain goal but (as is usual with life) was really headed someplace else nobody knew about.

That makes it sound like the new mysticism for a new Dark Ages. Well, we've already bitten the Apple. Good luck to those much younger than me who may be around to see either the new Heaven or the new Hell.

joshua_d_greene's picture

Cognitive Neuroscientist and Philosopher, Harvard University

Have you ever read a great book from before the mid-1990s and thought to yourself, "My goodness! These ideas are so primitive! So… pre-Internet!"? Me neither. The Internet hasn't changed the way we think any more than the microwave oven has changed the way we digest food. The Internet has provided us with unprecedented access to information, but it hasn't changed what we do with it once it's made it into our heads. This is because the Internet doesn't (yet) know how to think. We still have to do it for ourselves, and we do it the old-fashioned way.

One of the Internet's early disappointments was the now-defunct website "Ask Jeeves." (It was succeeded by Ask.com, which dropped Jeeves in 2006.) Jeeves appeared as a highly competent infobutler who could understand and answer questions posed in natural language. ("How was the East Asian economy affected by the Latin American debt crisis?" "Why do fools fall in love?") Anyone who spent more than a few minutes querying Jeeves quickly learned that Jeeves himself didn't understand squat. Jeeves was just a search engine like the rest, mindlessly matching the words contained in your question to words found on the Internet. The best Jeeves could do with your profound question — the best any search engine can do today — is direct you to the thoughts of another human being who has already attempted to answer a question related to yours. This is not to say that cultural artifacts can't change the way we think.
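
To make the word-matching point concrete, here is a toy sketch in Python (my own illustration, not Ask Jeeves's actual code): the "engine" scores documents purely by how many of the query's words they contain, with no model of meaning at all.

    # A toy word-matching "search engine" (illustrative only: real
    # engines add many ranking signals, but word overlap of this
    # kind is the mindless core described above).
    def tokenize(text):
        return set(text.lower().split())

    def search(query, documents):
        q = tokenize(query)
        # Score each document by how many query words it shares.
        scored = [(len(q & tokenize(d)), d) for d in documents]
        # Best overlap first; drop documents sharing no words at all.
        return [d for score, d in sorted(scored, reverse=True) if score > 0]

    docs = [
        "Why do fools fall in love? A history of doo-wop.",
        "The Latin American debt crisis and East Asian economies.",
        "How to fall asleep faster.",
    ]
    print(search("Why do fools fall in love?", docs))

Nothing in those lines knows what a fool or love is; the best the code can do is hand back text written by a human being who did.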

Jim Flynn has documented massive gains in IQ over the 20th Century (the "Flynn Effect"), which he attributes to our enhanced capacity for abstract thought, which he in turn attributes to the cognitive demands of the modern marketplace. Why hasn't the Internet had a comparable effect? The answer, I think, is that the roles of master and servant are reversed. We place demands on the Internet, but the Internet hasn't placed any fundamentally new demands on us. In this sense, the Internet really is like a butler. It gives us the things that we want faster and with less effort, but it doesn't give us anything that we couldn't otherwise get for ourselves and doesn't require us to do anything more than give comprehensible orders.

Someday we'll have a nuts-and-bolts understanding of complex abstract thought, which will enable us to build machines that can do it for us, and perhaps do it better than we do, and perhaps teach us a thing or two about it. But until then, the Internet will continue to be nothing more, and nothing less, than a very useful, and very dumb, butler.

brian_eno's picture

I notice that some radical social experiments which would have seemed Utopian to even the most idealistic anarchist 50 years ago are now working smoothly and without much fuss. Among these are open source development, shareware and freeware, Wikipedia, MoveOn, and UK Citizens Online Democracy.

I notice that the Net didn't free the world in quite the way we expected — repressive regimes can shut it down, and liberal ones can use it as a propaganda tool. On the upside, I notice that the variable trustworthiness of the Net has made people more sceptical about the information they get from all other media.

I notice that I now digest my knowledge as a patchwork drawn from a wider range of sources than I used to. I notice too that I am less inclined to look for joined-up finished narratives and more inclined to make my own collage from what I can find. I notice that I read books more cursorily — scanning them in the same way that I scan the Net — 'bookmarking' them.

I notice that the turn-of-the-century dream of Professor Darryl Macer to make a map of all the world's concepts is coming true autonomously — in the form of the Net.

I notice that I correspond with more people but at less depth. I notice that it is possible to have intimate relationships that exist only on the Net — that have little or no physical component. I notice that it is even possible to engage in complex social projects — such as making music — without ever meeting your collaborators. I am unconvinced of the value of these.

I notice that the idea of 'community' has changed — whereas that term used to connote some sort of physical and geographical connectedness between people, it can now mean 'the exercise of any shared interest'. I notice that I now belong to hundreds of communities — the community of people interested in active democracy, the community of people interested in synthesizers, in climate change, in Tommy Cooper jokes, in copyright law, in a cappella singing, in loudspeakers, in pragmatist philosophy, in evolution theory, and so on.

I notice that the desire for community is sufficiently strong for millions of people to belong to entirely fictional communities such as Second Life and World of Warcraft. I worry that this may be at the expense of First Life.

I notice that more of my time is spent in words and language — because that is the currency of the Net — than it was before. My notebooks take longer to fill. I notice that I mourn the passing of the fax machine, a more personal communication tool than email because it allowed the use of drawing and handwriting. I notice that my mind has reset to being primarily linguistic rather than, for example, visual.

I notice that the idea of 'expert' has changed. An expert used to be 'somebody with access to special information'. Now, since so much information is equally available to everyone, the idea of 'expert' becomes 'somebody with a better way of interpreting'. Judgement has replaced access.

I notice that I have become a slave to connectedness — that I check my email several times a day, that I worry about the heap of unsolicited and unanswered mail in my inbox. I notice that I find it hard to get a whole morning of uninterrupted thinking. I notice that I am expected to answer emails immediately, and that it is difficult not to. I notice that as a result I am more impulsive.

I notice that I more often give money in response to appeals made on the Net. I notice that 'memes' can now spread like virulent infections through the vector of the Net, and that this isn't always good.

I notice that I sometimes sign petitions about things I don't really understand because it is easy. I assume that this kind of irresponsibility is widespread.

I notice that everything the Net displaces reappears somewhere else in a modified form. For example, musicians used to tour to promote their records, but, since records stopped making much money due to illegal downloads, they now make records to promote their tours. Bookstores with staff who know about books and record stores with staff who know about music are becoming more common.

I notice that, as the Net provides free or cheap versions of things, 'the authentic experience' — the singular experience enjoyed without mediation — becomes more valuable. I notice that more attention is given by creators to the aspects of their work that can't be duplicated. The 'authentic' has replaced the reproducible.

I notice that almost all of us haven't thought about the chaos that would ensue if the Net collapsed.

I notice that my daily life has been changed more by my mobile phone than by the Internet.

nick_isaac's picture

Macroecologist, Centre for Ecology & Hydrology, Biological Records Centre (BRC), Oxfordshire

"Standing on giant's shoulders" is a common metaphor for scientific progress. In order to be a scientist, one must first climb the body of the giant, i.e. the accumulated knowledge of previous generations. Reading the published work of other scientists is therefore the most fundamental activity that we perform as academics. The Internet is changing not just the way we use the giant, but also how the giant grows with the accretion of new knowledge.

There are two ways in which scientists learn about relevant literature. One is to browse new publications; the other is to find papers when they get cited by other papers. The former is more common in fast-moving fields like medicine and physics, but the latter is widespread in my own field of ecology, where the longevity of most research papers (judged by the half-life of citation decay) is in excess of a decade. The Internet has far-reaching consequences for both modes of knowledge acquisition.
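
For readers outside bibliometrics, that half-life figure can be unpacked with a back-of-envelope calculation (my own illustration; the exponential-decay model and the round ten-year number are assumptions standing in for "in excess of a decade"):

    import math

    # Assume annual citations to a paper decay roughly exponentially,
    # c(t) = c0 * exp(-lam * t); the citation half-life is then
    # t_half = ln(2) / lam. (Simplified model, for illustration.)
    def decay_rate(t_half_years):
        return math.log(2) / t_half_years

    lam = decay_rate(10.0)  # a ten-year half-life, per the text
    print(f"decay rate: {lam:.3f} per year")  # about 0.069
    # Share of the peak citation rate still arriving after 20 years:
    print(f"still cited at 20 years: {math.exp(-lam * 20):.2f}")  # 0.25

On those assumptions, an ecology paper is still drawing a quarter of its peak citation rate two decades after publication, which is why following citation trails remains such a productive way to find literature in the field.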

Reading new publications has been revolutionised by services that alert us via email whenever new papers are published in a defined topic area. This means it is no longer necessary to spend time in the library looking through tables of contents (TOC). Although this has obvious benefits in efficiency, there is a cost in terms of the breadth of articles we are likely to consume. In the old days, one would glance at all the titles and perhaps most of the abstracts in a particular journal issue. For example, the current issue of the journal Ecology contains articles on bacteria, plants, insects, fish and birds, covering a wide range of research topics, both theoretical and empirical. Electronic TOC alerts mean that most researchers encounter only articles in their own area of specialism and are therefore much less likely to come across new and potentially transformative ideas. There is a paradox here: the Internet offers the potential to access the full spectrum of research papers, but actually results in a narrowing of focus. The same phenomenon has been observed in online social networks, which are no more socially and ethnically heterogeneous than real ones.
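
The narrowing is mechanical rather than mysterious. A hypothetical alert filter (invented titles and code, just the gist of how such services behave) makes it plain that everything outside the subscriber's keywords silently disappears:

    # A hypothetical TOC alert: deliver only the titles that match
    # the subscriber's keywords. The issue below is invented.
    issue = [
        "Nitrogen limitation in alpine plant communities",
        "Predator-prey cycles in boreal insect populations",
        "Foraging behaviour of reef fish under ocean warming",
        "Range shifts in migratory birds",
    ]

    def toc_alert(titles, keywords):
        kws = [k.lower() for k in keywords]
        return [t for t in titles if any(k in t.lower() for k in kws)]

    # A bird specialist's alert delivers one article; browsing the
    # printed issue would have put all four titles in front of them.
    print(toc_alert(issue, ["bird", "avian"]))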

The Internet revolution has equally profound consequences for the second mode of knowledge acquisition. In the old days, I would read an article from start to finish and make a list of relevant citations to fetch from the library. Nowadays, the ubiquity of electronic articles in Portable Document Format (PDF) means I can get the cited article on screen in just a few clicks. There's no longer any need to move from my desk, or even to finish one article before going on to the next. Often when reading a PDF, I simply scan the text in search of a key assertion or statement. This changes the very nature of scientific publications and the way they are used. Articles become known through citation for a single contribution to knowledge: either a new method or a surprising result, but never both.

The changes to scientists' reading habits due to the Internet are similar to the distinction between grazing and browsing animals. Grazers like cattle consume grass in bulk during intensive feeding bouts. Most grass is not especially nutritious and is regurgitated later as the animals sit reflectively and chew the cud. Bulk feeding and rumination means that cattle are large and ungainly creatures. By contrast, browsers like deer are much more picky in the plants they eat and select only the greenest shoots. This means that deer consume smaller quantities of food than cattle, but are constantly on the move and spend much less time at rest. Thus, the modern Internet-era scientist may be mentally nimble as the deer is physically nimble, but lacks time for cattle-like rumination.

The Internet has undoubtedly brought great benefits to us all. At the same time, the Internet makes us more specialised and compartmentalised in the kinds of knowledge we access and absorb. This is a problem in an age where interdisciplinary solutions are required to solve the complex and sometimes conflicting problems of climate change, poverty, disease and biodiversity loss. In this setting, the role of informal fora for cross-disciplinary engagement becomes even more important. Here it is harder to see the Internet as a solution, because the chat room can never provide the chance encounters, nor replicate the convivial cosiness, of an old-fashioned low-tech coffee room.

fabrizio_gallanti's picture

Senior Fellow, Princeton University School of Architecture

I believe in the concept of the haptic nervous system, in which the brain and neuronal cells are distributed along the nerve fibres of the whole body, not just resident in the skull. I therefore believe that body and brain are connected and that learning is also a physical phenomenon.

I know how the Internet has changed my body, but not really how it has changed my way of thinking.

My short sight has remained fairly stable: reading from a screen strains neither the retina nor the muscles of my eyes. I have therefore been able to avoid resorting to laser therapy to correct retinal tension, as happened to me in the early nineties. In that period I was studying architecture, and drawing by hand put great stress on my eyes, almost causing retinal holes and detachment.

Owing to my position in front of the computer screen and to the lack of physical exercise that comes from too intense a use of the Internet (with every advance in connection speed, more hours of it), I developed two herniated discs in the cervical region (detected in 2005) and two herniated discs in the lumbar region (detected in 2008). The first two provoke numbness and a certain loss of strength in my thumbs, while the last two cause sciatic pain in my right leg, which is variable but aggravated by the position I adopt to navigate the Internet for long hours. So it hurts more on weekdays. There was, in any case, a family history of hernias.

The numbness of the thumbs, a disorder deriving from the compression of the spinal nerves in my neck, is aggravated by the use of portable devices for accessing the web, on which the thumbs are the main fingers used, so that muscular fatigue becomes a secondary source of stress. iPhones should carry some disclaimer about that.

On the other hand, the information provided by the Internet and then stored in lightweight portable devices such as pen drives or external hard disks saves me from carrying heavy books around, thereby protecting my back. I can also shop online and wait for the goods to be delivered to my door. These are the main changes registered so far.

The Internet also offers me an instant and fast supply of information about the pathologies I know I suffer from and the new symptoms that arise suddenly, thus sustaining a mild form of hypochondria. It seems ironic that, given the ease of obtaining this information, rather than thinking more about the world outside me, I tend to think more about myself and how I feel and what this could mean (not always, but quite frequently): I surf the website of some obscure osteopath in Nebraska only to come back to my petty little problems.

So I would say that at the very least the Internet has made me a more informed patient. But I am not sure whether that knowledge is really valuable: my daughters' paediatrician has forbidden me to check online for the illnesses they might be suffering from, as my inclination to self-diagnosis tends to extend beyond myself to my whole family, and the grim prognoses I imagine can be very wrong. I wonder whether the difficulty of getting information before the Internet was not somehow protecting us from a new, diffuse expertise like that of Bouvard and Pécuchet.

haim_harari's picture

Physicist, former President, Weizmann Institute of Science; Author, A View from the Eye of the Storm

It is entirely possible that the Internet is changing our way of thinking in more ways than I am willing to admit, but there are three clear changes that are palpable:

The first is the increasing brevity of messages.

Between Twittering, chatting and sending abbreviated BlackBerry e-mails, the "old" sixty-second sound bite of TV newscasts is now converted into one-liners attempting to describe ideas, principles, events, complex situations and moral positions.

Even when the message itself is somewhat longer, the fact that we are exposed to more messages than ever before means that the attention "dose" allocated to each item is tiny. The result, for the general public, is a flourishing of extremist views on everything. Not only in politics, where only the ideas of the lunatic far left and the crazy far right can be stated in one sentence, but also in matters of science.

It is easy to state in one sentence nonsense such as "the theory of evolution is wrong", "global warming is a legend", "immunization causes autism" and "God (mine, yours, or hers) has all the answers". It requires long essays to explain and discuss the "ifs" and "buts" of real science and of real life.

I, personally, find that this trend makes me a fanatic anti-extremist. I am boiling mad whenever I see or read such telegraphic (to use an ancient terminology) elaborations of ideas and facts, knowing that they are so wrong and misleading and that, at the same time, they find their way into so many hearts and minds. Even worse, people who are still interested in a deeper analysis and a balanced view of topics, whether scientific, social, political or other, are considered leftovers from an earlier generation, and are labeled as extremists of the opposite color by the fanatics of one corner or another.

The second change is the diminishing role of factual knowledge in the thinking process.

The thought pattern of different people, on different subjects, requires varying mixtures of knowing facts, being able to correlate them, creating new ideas, distinguishing between important and secondary matters, knowing when to prefer pure logic and when to let common sense dominate, analyzing processes and numerous other components of a complex mental exercise.

The Internet allows us to know fewer facts, being sure that they are always literally at our fingertips, thus reducing their importance as a component of the thought process. This is similar to, but much more profound than, the reduced role of pure computation and simple arithmetic with the introduction of calculators.

But we should not forget that, often, in the scientific discovery process, the greatest challenge is to ask the right question, rather than answer a well posed question, and to correlate facts that no one thought of connecting. The existence of many available facts, somewhere in the infinite ocean of the Internet, is no help in such an endeavor. I find, personally, that my scientific thinking is changed very little by the availability of all of these facts, but my attitude towards social, economic and political issues is enriched by having many more facts at my disposal.

An important warning is necessary here: a crucial enhanced element of the thought process, demanded by the flood of available facts, must be the ability to evaluate the credibility of "facts" and of "quasi-facts". Both are abundant on the Web, and telling them apart is not as easy as it may sound.

The third change is in the entire process of teaching and learning.

Here it is clear that the change must be profound and multifaceted, but it is equally clear that, due to the ultraconservative nature of the educational system, it has not yet happened on a large scale.

The Internet brings us art treasures, the ability to simulate complex experiments, mechanisms of learning by trial and error, explanations and lessons from the greatest teachers on earth, special aids for children with special needs, less need to memorize facts and numbers, and numerous other incomparable marvels not available to previous generations. Anyone involved in teaching, from kindergarten to graduate school, must be aware of the endless opportunities, as well as of the lurking dangers. These changes in learning, when they materialize, may create an entirely different pattern of knowledge, understanding and thinking in the student mind.

I am personally amazed by how little has changed in the world of education, but, whether we like it or not, the change must happen and it will happen. It may take another decade or two, but education will never be the same. An interesting follow-up issue to this last comment is whether the minds and brains of children growing up in an Internet-inspired educational system will be physically "wired" differently than those of earlier generations. I tend to speculate in the affirmative, but this may only be answered by the Edge question of 2040.

juan_enriquez's picture

Managing Director, Excel Venture Management; Co-author (with Steve Gullans), Evolving Ourselves

The most important impact on my life and yours is that the Internet grants immortality. Think of your old archaeology/sociology/history course, or your visits to various museums. Think of how painstakingly arrowheads, outhouses, bones, beads, textiles, sentence fragments etc. have been discovered, uncovered, studied, and preserved.

But these few scraps have provided real knowledge while leaving large lagoons filled with conjecture, theories, speculation and outright fairy tales. Despite this, we still know an awful lot about a very few.

Because most of our knowledge of the past depends on very little about very few, the story of very few lives survives.

As we got better at transmitting and preserving data, we learned quite a bit more about many more.

Biographies could rely not just on letters, songs, and folk tales but on increasingly complete business ledgers, bills of sale, newspapers, and government and religious records.

By the time of the last great typhoid epidemics and fires in the U.S. and Europe, we could trace the history of specific houses, families, wells, cows, and outhouses. We could build a specific history of a neighborhood, family, and individual. But there were still very large lagoons in our knowledge. Not so today. Any electronic archaeologist, sociologist or historian examining our e-lives would be able to understand, map, compare, contrast, and judge our lives in a degree of detail incomprehensible to any previous generation.

Think of a single day of our lives. Almost the first thing that happens after turning off an alarm clock, before brushing teeth, having coffee, seeing a child, or opening a paper, is reaching for that phone, iPhone, or Blackberry. As it comes on and speaks to us or we speak through it, it continues to create a map of almost everything in our lives.

Future sociologists and archaeologists will have access to excruciatingly detailed pictures on an individual basis of what arrived, what was read, ignored, deleted, forwarded and responded to. Complement this stream of data with Facebook, Twitter, Google, blogs, newspapers, analyst reports, Flickr, and you get a far more concrete and complete picture of each and every one of us than even the most extraordinary detail found by historians on the most studied, respected and reviled of leaders.

And by the way, this cache is decentralized. It exists and multiplies at various sites. Digging through the Egyptian pyramids will look like child’s play compared to what future scholars will find at Google, Microsoft, the NSA, the credit bureaus, or a host of parallel universes.

It is virtually impossible to edit or eliminate most traces of our lives today, and for better or worse, we have now achieved that which the most powerful Egyptians and Greeks always sought — immortality.

So how has this newfound immortality affected my thinking? Well, those of a certain age learned long ago, from the triumphs and tragedies of the Greek gods, that there are clear rules separating the mortal and the immortal. Trespasses tolerated and forgiven in the fallible human have drastic consequences for gods. In the immortal world, all is not forgiven and mostly forgotten after you shuffle off to Heaven.

howard_gardner's picture

Hobbs Professor of Cognition and Education, Harvard Graduate School of Education; Author, A Synthesizing Mind

The Internet has changed my life greatly, but not in a way that I could have anticipated, nor in the way that the question implies. Put succinctly, just as if a newly discovered preliterate tribe had challenged my beliefs about human language and human culture, the Internet has altered my views of human development and human potential.

Several years ago, I had a chance conversation with Jonathan Fanton, then President of the MacArthur Foundation. He mentioned that the Foundation was sponsoring a major study, to the tune of 50 million dollars, of how young people are being changed by the new digital media, such as the Internet. At the time, as part of our GoodWork research project, I was involved in studies of ethics, focusing particularly on the ethical orientation of young people. And so I asked President Fanton, "Are you looking at the ways in which the ethics of youth may be affected?" He told me that the Foundation had not thought about this issue. After several conversations and a grant application, our GoodPlay project, a social science study of ethics in the digital media, was launched.

Even though I myself am a digital immigrant—I sometimes refer to myself as a digital paleolith—I now spend many hours a week thinking about the ways in which nearly all of us—young and old—are affected by being online, networked, surfing, or posting for so much of the day. I've become convinced that the "digital revolution" might be as epoch-making as the invention of writing or, certainly, the invention of printing or of broadcast. While I agree with those who caution that it is premature to detail what the effects might be, it is not too early to begin to think, observe, reflect, and conduct pivotal observations and experiments. Indeed, I wish that social scientists and/or other observers had been around when earlier new media of communication debuted.

Asked for my current thinking, I would make the following points. The lives and minds of young people are far more fragmented than at earlier times. This multiplicity of connections, networks, avatars, and messages may not bother them, but it certainly makes for identities that are more fluid and less stable. Times for reflection, introspection, and solitude are scarce. Longstanding views of privacy and ownership/authorship are being rapidly undermined. Probably most dramatically, what it has meant for millennia to belong to a community is being totally renegotiated as a result of instant 24/7 access to anyone who is connected to the Internet. How this will affect intimacy, imagination, democracy, social action, citizenship, and other staples of humankind is up for grabs.

For older persons (even older than I am), the digital world is mysterious. Those of us who are middle-aged or beyond continue to live in two worlds—the pre-digital and the digital—and we may either be nostalgic for the days without BlackBerrys or relieved that we no longer have to trudge off to the library. But all persons who want to understand their children or their grandchildren must make the effort to "go native," and at such times, we digital immigrants or digital paleoliths can feel as fragmented, as uncertain about privacy, and as pulled by membership in diverse, and perhaps incommensurate, communities as any 15-year-old.


Source: https://www.edge.org/responses/how-is-the-internet-changing-the-way-you-think
