The Life Engineers: Prometheus Asks, Is a Culture as Stupid as Ours Ready to Create New Life?

Photo: Courtesy Twentieth Century Fox.

Scientific American answers almost all my questions, however crazy they may be. One of the crazy questions that popped into my head after watching Prometheus was: is our immature and highly stupid culture ready to father a new life form, like the Engineers in the film? And voilà, I found my answer in the Guest Blog section of Scientific American: a terrific article written by Daniel Grushkin and Wythe Marschall.

Pinning down exactly what Ridley Scott’s larger-than-life Prometheus means may be impossible. But it’s safe to say that the movie – the 3-D quasi-prequel to Scott’s seminal technoscience-horror fable, Alien – is self-consciously a myth for our scientific era.

Prometheus opens over the shoulder of an alabaster figure on the edge of a prehistoric waterfall. This alien, called an Engineer, drinks a poison and falls into the waters. Our camera-eye follows, diving into his cells, which are darkening and cracking apart. Then we dive further – into his very DNA, which is rapidly rotting and unwinding, but not disappearing. We are left unsure where his broken-down DNA is headed. Cue title sequence.

Why does Scott open with an act of alien genesis triggered by crumbling, black DNA? Regardless of what else the filmmakers want the opening scene to convey, this is a horror movie. Its opening suggests that something about DNA and DNA manipulation is a source of dread – even as society today embraces biotechnology.

The biggest questions Prometheus asks may be, if DNA is a type of code or a language, who wrote the code of human life, and what did they intend for us? Our discomfort with the fact that manipulating DNA is a technique like any other – something you can learn and exploit for useful, perhaps lucrative ends – is driven by concerns over the motivations of those doing the manipulating.

If we were coded by authors with a motivation – the Engineers in Prometheus – can we be sure that we’re acting on our own volition? Our discomfort with being “programmed” is logical, because free will presents us with a reason to behave more responsibly (you can’t blame anyone else) and a gift of constant discovery.

The Engineers are not new to our trove of archetypes. In a way, they’re unknowable like God, but in another way they’re just a larger, paler version of ourselves. Scott makes this clear when his archeologists discover that we share genomes. Humans are the Engineers’ golems, Pinocchios, and Frankenstein’s creatures. The horror that comes with this discovery is that our makers are just about as ungodlike as we are.

The movie isn’t really about the Engineers, however. The Engineers are us. Do we want life to be given purpose by people as fallible, silly, vain, and stupid as we are? Of course, we’ve been bioengineering already. Since the advent of agriculture and animal husbandry, we’ve shaped the genetic direction that species take. Think: wolves into Pomeranians. But breeding has always been a slow, sloppy form of programming.

So here we are at the gates to the age of biotechnology, where scientists bioengineer yeast and bacterial cells to produce materials for us like medicines and plastics, where they use genetically engineered viruses to manipulate the brain cells of mice as if neurons were the strings of a marionette. The moment when bioengineering becomes indistinguishable from computing is coming. Companies like Autodesk are already developing bioCAD software, and undergraduate students are doing bioengineering for their summer projects. With the Engineers – humanity’s Geppetto – Prometheus offers a slant on where this all might lead.

Sure, just as the Engineers, we’ll weave our own messy psychology into the life we make. But will we also leave something out? That is, free will – the ability to desire, act, and react in new ways. When behaviors are programmed, free will gets lost. Perhaps that’s okay when we think about a lowly E. coli churning out fuel. But we feel disgust the more we can identify with the organism we’re programming. Imagine owning a bioengineered dog that never veered from its hardwired instructions; it would be more of an appliance than a pet. And a completely programmed human being? This is the stuff of horror.


Science fiction’s forecasts have many times hit dead-on (submarines, space travel, computer hacking). Stories that focus on biotech often come back to a central theme, perhaps the predominant question of our age: When DNA becomes just another toolset, what will separate us from any other object that’s made? And what’s transcendent about life if we can design it ourselves?

In the biotech era, the creation of new life may be the ultimate source of bio-angst: the feeling that there’ll never be a satisfactory reason for our existence. If life is just stuff to be worked into new forms, then nothing separates life from ore, plastic, or anything else we can manipulate with tools.

Scott’s Prometheus shows that – as a culture forced to make increasingly difficult decisions regarding science – we haven’t escaped the debate central to Mary Shelley’s Frankenstein, which she subtitled The Modern Prometheus (no coincidence). Shelley’s creature – abandoned by his genius creator – asks: why did you make me? Scott’s movie just reverses the equation. In Prometheus, we are the creature, abandoned, and we want the Big Answer.

Prometheus is not simply the most recent, biggest-budget story of humanity’s bio-angst – it’s also the story that comes at a time when humanity is finally able to test the reasons for its angst empirically. Here at the opening of the biotech era, we’re both excited and afraid of what our future holds.


What Are Science’s Ugliest Experiments?

Walter Freeman and Transorbital Lobotomies

In 1949, the Portuguese neurologist Egas Moniz won a Nobel Prize for inventing the lobotomy, a treatment for mental illness that called for inserting a sharp instrument into holes drilled through the skull and destroying tissue in the frontal lobes. By then, the physician Walter Freeman Jr. (father of neuroscientist Walter Freeman III, a leading consciousness researcher) had already begun carrying out lobotomies in the United States. In 1941 Freeman lobotomized the unruly, 23-year-old sister of John F. Kennedy; Rosemary Kennedy was so severely disabled after her lobotomy that she required care for the rest of her life. Freeman later invented the transorbital lobotomy, which involved slipping an ice pick past the eyeball, thrusting it through the rear of the eye socket and swishing it back and forth in the brain.

In the 1950s, Freeman drove across the U.S. and Canada in a station wagon, which he called the “Lobotomobile,” performing as many as 25 transorbital lobotomies a day on patients at mental hospitals – often after knocking them out with electroshock therapy. Three patients at an Iowa hospital died on the same day after he operated on them, according to “The Lobotomist,” a 2009 documentary. Freeman nonetheless kept practicing lobotomies – as many as 5,000 in all – until 1967, when (as I have reported elsewhere) one of his patients died of a cerebral hemorrhage.

In 1949 The New York Times hailed Moniz and other lobotomists for helping us “to look with less awe at the brain. It is just a big organ…no more sacred than the liver.” Until his death in 1972, Freeman insisted that lobotomies had helped most of his patients. But as the medical historian Edward Shorter has noted: “Freeman’s definition of success is that the patients are no longer agitated. That doesn’t mean that you’re cured, that means they could be discharged from the asylum, but they were incapable of carrying on normal social life. They were usually demobilized and lacking in energy. And they were that on a permanent basis.”

The Biggest U.S. H-Bomb Test Ever

On March 1, 1954, in a test code-named “Castle Bravo,” the U.S. detonated a thermonuclear bomb on Bikini Atoll, one of the Marshall Islands, in the Pacific Ocean. Physicists at Los Alamos National Laboratory, where the bomb was designed, estimated that it would have a yield equivalent to 5 million tons, or megatons, of conventional high explosives. The yield turned out to be 15 megatons, 1,000 times more than the fission bombs that destroyed Hiroshima and Nagasaki in 1945. The explosion gouged a crater more than a mile wide out of Bikini, ballooned into a fireball more than four miles across and spewed radioactive debris so high into the atmosphere that it ended up spanning the globe. Inhabitants of other Marshall Islands, 100 miles or more from Bikini, suffered from radiation poisoning, as did 23 men on a Japanese fishing boat, the “Lucky Dragon,” 80 miles from ground zero. One man on the Lucky Dragon died months after returning to port. Before Bravo, U.S. officials apparently worried that prevailing winds might carry fallout over inhabited areas but decided to proceed with the test. Bravo remains the biggest U.S. nuclear explosion, but its yield was less than a third that of the Tsar Bomba, detonated by the Soviet Union in 1961. Public concerns over these enormous explosions led to a ban on atmospheric testing in 1963, but the arms race continued. Today, according to the Stockholm International Peace Research Institute, eight nations possess a total of more than 20,000 nuclear weapons.
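The yield figures above are easy to sanity-check with a line of arithmetic. A minimal sketch, with one assumption the article doesn’t state: the Hiroshima fission bomb is commonly estimated at roughly 15 kilotons.

```python
# Castle Bravo yield arithmetic, using figures from the text plus one
# assumption: the 1945 Hiroshima bomb yielded roughly 15 kilotons.
predicted_mt = 5        # Los Alamos pre-test estimate, in megatons
actual_mt = 15          # measured yield, in megatons
hiroshima_kt = 15       # assumed Hiroshima yield, in kilotons

overshoot = actual_mt / predicted_mt            # how far off the estimate was
ratio_to_hiroshima = actual_mt * 1000 / hiroshima_kt

print(overshoot)            # 3.0 – the test was three times the prediction
print(ratio_to_hiroshima)   # 1000.0 – the "1,000 times" figure in the text
```

The “1,000 times” claim checks out against that assumed Hiroshima figure; the prediction itself was off by a factor of three.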

Can a Brain Implant Make a Gay Man Straight?

The psychiatrist Robert Heath, who headed the department of psychiatry and neurology at Tulane University from 1949 to 1980, did pioneering research on the potential of electrical stimulation of the brain to treat schizophrenia and other disorders. In a paper published in 1972 in The Journal of Nervous and Mental Disease, Heath described an experiment on a 24-year-old male homosexual with a history of epilepsy, depression, and drug abuse. The man, whom Heath called patient B-19, was facing charges for marijuana possession when he agreed to serve as Heath’s subject. Heath drilled a hole in B-19’s skull and inserted an electrode in the septal region of his brain, which is associated with pleasure. B-19 could stimulate himself by pressing a button on a hand-held device. B-19, who according to Heath had never had heterosexual intercourse and found it “repugnant,” stimulated himself to the point of orgasm while watching a heterosexual porn film and, later, having intercourse with a 21-year-old female prostitute supplied by Heath. The patient “achieved successful penetration, which culminated in a highly satisfactory orgiastic response, despite the milieu and the encumbrances of the lead wires to the electrodes,” Heath wrote. One wonders what an institutional review board would say about Heath’s research today.

Dosing Kids with Psychiatric Meds

Are the days of ugly research over? If only. In the past two decades, American psychiatrists have been carrying out what is in effect an enormous clinical trial involving millions of children. Physicians are medicating children with stimulants such as Ritalin, antidepressants such as Prozac, anti-anxiety drugs such as Xanax, bipolar drugs such as lithium and antipsychotics such as Risperdal. “It’s really to some extent an experiment, trying medications in these children of this age,” child psychiatrist Patrick Bacon told producers of the 2008 PBS documentary “The Medicated Child.” “It’s a gamble. And I tell parents there’s no way to know what’s going to work.” As of 2009, more than 500,000 American adolescents and children, including toddlers younger than two, were taking antipsychotics, which “may pose grave risks to development of both their fast-growing brains and their bodies,” according to The New York Times. In Anatomy of an Epidemic (Crown, 2010), journalist Robert Whitaker presents evidence that psychiatric drugs may be hurting more children than they help. Since 1987, he reports, while prescriptions for children have soared, the number of patients under 18 receiving federal disability payments for mental illness has multiplied by a factor of 35. By this measure, the experiment does not seem to be working.

The full list is very, very long, but these few experiments came to light fairly recently, which is why they appear here.

How Critical Thinkers Lose Their Faith in God

Religious belief drops when analytical thinking rises.

Why are some people more religious than others? I have always wondered why I am not able to accept the presence of a greater force. I know that it exists, so why am I not able to accept it? Answers to this question often focus on the role of culture or upbringing. While these influences are important, new research suggests that whether we believe may also have to do with how much we rely on intuition versus analytical thinking. In 2011 Amitai Shenhav, David Rand and Joshua Greene of Harvard University published a paper showing that people who tend to rely on their intuition are more likely to believe in God. They also showed that encouraging people to think intuitively increased their belief in God. Here is the abstract from the paper:

Some have argued that belief in God is intuitive, a natural (by-)product of the human mind given its cognitive structure and social context. If this is true, the extent to which one believes in God may be influenced by one’s more general tendency to rely on intuition versus reflection. Three studies support this hypothesis, linking intuitive cognitive style to belief in God. Study 1 showed that individual differences in cognitive style predict belief in God. Participants completed the Cognitive Reflection Test (CRT; Frederick, 2005), which employs math problems that, although easily solvable, have intuitively compelling incorrect answers. Participants who gave more intuitive answers on the CRT reported stronger belief in God. This effect was not mediated by education level, income, political orientation, or other demographic variables. Study 2 showed that the correlation between CRT scores and belief in God also holds when cognitive ability (IQ) and aspects of personality were controlled. Moreover, both studies demonstrated that intuitive CRT responses predicted the degree to which individuals reported having strengthened their belief in God since childhood, but not their familial religiosity during childhood, suggesting a causal relationship between cognitive style and change in belief over time. Study 3 revealed such a causal relationship over the short term: Experimentally inducing a mindset that favors intuition over reflection increases self-reported belief in God.
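The math problems the abstract refers to can be illustrated with the best-known CRT item, Frederick’s bat-and-ball question. Working it through shows how the intuitively compelling answer differs from the correct one:

```python
# The classic CRT item (Frederick, 2005): "A bat and a ball cost $1.10
# in total. The bat costs $1.00 more than the ball. How much does the
# ball cost?" Intuition says 10 cents; the algebra says otherwise.
#
#   bat + ball = 1.10  and  bat = ball + 1.00
#   => 2 * ball + 1.00 = 1.10  =>  ball = (1.10 - 1.00) / 2
ball = (1.10 - 1.00) / 2
bat = ball + 1.00

print(round(ball, 2))  # 0.05 – the correct answer, not the intuitive 0.10
print(round(bat, 2))   # 1.05
```

Giving “10 cents” here is exactly the kind of intuitive-but-wrong response the studies used to index an intuitive cognitive style.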

Building on this, another paper, published in Science by Will Gervais and Ara Norenzayan of the University of British Columbia, found that encouraging people to think analytically reduced their tendency to believe in God. Together these findings suggest that belief may at least partly stem from our thinking styles. (Well, obviously – but read on.) Here is the abstract from that paper too:

Scientific interest in the cognitive underpinnings of religious belief has grown in recent years. However, to date, little experimental research has focused on the cognitive processes that may promote religious disbelief. The present studies apply a dual-process model of cognitive processing to this problem, testing the hypothesis that analytic processing promotes religious disbelief. Individual differences in the tendency to analytically override initially flawed intuitions in reasoning were associated with increased religious disbelief. Four additional experiments provided evidence of causation, as subtle manipulations known to trigger analytic processing also encouraged religious disbelief. Combined, these studies indicate that analytic processing is one factor (presumably among several) that promotes religious disbelief. Although these findings do not speak directly to conversations about the inherent rationality, value, or truth of religious beliefs, they illuminate one cognitive factor that may influence such discussions.

Gervais and Norenzayan’s research is based on the idea that we possess two different ways of thinking that are distinct yet related. Understanding these two ways, which are often referred to as System 1 and System 2, may be important for understanding our tendency towards having religious faith. System 1 thinking relies on shortcuts and other rules-of-thumb while System 2 relies on analytic thinking and tends to be slower and require more effort. Solving logical and analytical problems may require that we override our System 1 thinking processes in order to engage System 2. Psychologists have developed a number of clever techniques that encourage us to do this. Using some of these techniques, Gervais and Norenzayan examined whether engaging System 2 leads people away from believing in God and religion.

For example, they had participants view images of artwork that are associated with reflective thinking (Rodin’s The Thinker) or more neutral images (Discobolus of Myron). Participants who viewed The Thinker reported weaker religious beliefs on a subsequent survey. However, Gervais and Norenzayan wondered if showing people artwork might have made the connection between thinking and religion too obvious. In their next two studies, they created a task that more subtly primed analytic thinking. Participants received sets of five randomly arranged words (e.g. “high winds the flies plane”) and were asked to drop one word and rearrange the others in order to create a more meaningful sentence (e.g. “the plane flies high”). Some of their participants were given scrambled sentences containing words associated with analytic thinking (e.g. “analyze,” “reason”) and other participants were given sentences that featured neutral words (e.g. “hammer,” “shoes”). After unscrambling the sentences, participants filled out a survey about their religious beliefs. In both studies, this subtle reminder of analytic thinking caused participants to express less belief in God and religion. The researchers found no relationship between participants’ prior religious beliefs and their performance in the study. Analytic thinking reduced religious belief regardless of how religious people were to begin with.
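The paper doesn’t publish its stimulus-generation code, but the scrambled-sentence task described above is easy to sketch. This is a hypothetical illustration of my own (function name and word choices assumed), built around the article’s example item:

```python
import random

def make_item(target_sentence, distractor, seed=0):
    """Build one scrambled-sentence priming item.

    Take a short target sentence, add one distractor word, and shuffle
    the resulting five words. The participant's task is to drop one
    word and rearrange the rest into a meaningful sentence
    (e.g. "the plane flies high").
    """
    words = target_sentence.split() + [distractor]
    random.Random(seed).shuffle(words)
    return words

# Analytic-prime item: the extra word is an analytic-thinking word.
analytic_item = make_item("the plane flies high", "reason")
# Neutral item: the extra word is neutral, as in the article's example.
neutral_item = make_item("the plane flies high", "winds")
print(analytic_item)
print(neutral_item)
```

The manipulation is carried entirely by which word pool the fifth word is drawn from; the unscrambling task itself is identical in both conditions.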

In a final study, Gervais and Norenzayan used an even more subtle way of activating analytic thinking: by having participants fill out a survey measuring their religious beliefs that was printed in either clear font or font that was difficult to read. Prior research has shown that difficult-to-read font promotes analytic thinking by forcing participants to slow down and think more carefully about the meaning of what they are reading. The researchers found that participants who filled out a survey that was printed in unclear font expressed less belief as compared to those who filled out the same survey in the clear font.

These studies demonstrate yet another way in which our thinking tendencies, many of which may be innate, have contributed to religious faith. Since System 2 thinking requires a lot of effort, the majority of us tend to rely on our System 1 thinking processes when possible. Evidence suggests that the majority of us are more prone to believing than being skeptical.

Why and how might analytic thinking reduce religious belief? Although more research is needed to answer this question, Gervais and Norenzayan speculate on a few possibilities. For example, analytic thinking may inhibit our natural intuition to believe in supernatural agents that influence the world. Alternatively, analytic thinking may simply cause us to override our intuition to believe and pay less attention to it. It’s important to note that across studies, participants ranged widely in their religious affiliation, gender, and race. None of these variables were found to significantly relate to people’s behavior in the studies.

Gervais and Norenzayan point out that analytic thinking is just one reason out of many why people may or may not hold religious beliefs. In addition, these findings do not say anything about the inherent value or truth of religious beliefs—they simply speak to the psychology of when and why we are prone to believe. Most importantly, they provide evidence that rather than being static, our beliefs can change drastically from situation to situation, without us knowing exactly why.

What do I think as a reader? I honestly still don’t know…

What about you?

The Trouble With Wi-Fi

To most people, Wi-Fi is something of a miracle. Within 150 feet of some hidden base station, your laptop, tablet or phone can get online at cable-modem speeds—wirelessly.

But Wi-Fi is also something of a mystery. So here is a list of the most common questions, with answers from experts, that I found in an issue of Scientific American.

Often my laptop detects a four-bar Wi-Fi hot spot, but I can’t get online. What gives?
In the mid-1990s Alex Hills built a huge wireless network at Carnegie Mellon University that became the prototype for modern Wi-Fi networks—a story he tells in his book Wi-Fi and the Bad Boys of Radio. I figured that he would be perfect for this one. His explanation:

“Two issues might cause this. First, radio problems. The bars are an indication of how strong the Wi-Fi signal is, but they don’t tell you anything about interference or other radio problems that can corrupt a strong signal.

“Second, most Wi-Fi systems connect to wired networks that connect you to the Internet. But there may be problems in these wired networks: problems with link speeds, switches, routers, servers, and the like. You have a good Internet connection only when all of the links in the chain are doing their jobs.”

Why do expensive hotels charge for Wi-Fi but inexpensive hotels don’t?
Don Millman’s company, Point of Presence Technologies, runs the Wi-Fi for 150 hotels. His answer:

“Expense accounts: higher-end hotels attract business travelers who expense their stays, so the fee matters less to them.”

We’re frequently warned about the hazards of using free open hot spots, like the ones at coffee shops. What, exactly, is the risk?
Glenn Fleishman has covered networking for more than a decade (currently on the Economist’s Babbage blog):

“A bad guy across the room might be running free software that sniffs every bit passing over the wire­less network and grabs passwords, credit card numbers, and the like.

“You don’t have to worry about banking and e-commerce Web sites; they’re protected by secure, encrypted connections.

“But without encrypting your e-mail and regular Web sessions, you never know if someone sitting within ‘earshot’ is slurping down your data for the purposes of identity theft or draining a bank account. My tip: always use a virtual private network (VPN) connection, which blocks anyone on the local network from seeing anything but scrambled data.”

What’s up with the “Free Public Wi-Fi” hot spot that sometimes shows up at hotels and airports—even on planes—­but that rarely yields any actual connection?
I’ll field this one: Don’t bother trying to connect to “Free Public Wi-Fi” (or “hpsetup” or “linksys”). It’s never a working Wi-Fi hot spot. It’s actually a viral “feature” of Windows XP running amok.

Whenever Windows XP connects to Wi-Fi, it also broadcasts that hot spot’s name to other computers as an “ad hoc” (PC-to-PC) network so that they can enjoy the connection, too. Someone, somewhere, once created a real hot spot called Free Public Wi-Fi, probably as a prank. Ever since, that name has been broadcast wirelessly from one Windows computer to another. (Macs see the phony hot spot, too, but don’t rebroadcast it.)

In public places, people try and fail to connect—but now their PCs start rebroadcasting this ad hoc network’s name, too, and on and on it goes. Best bet: don’t connect.
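The rebroadcast-and-spread mechanism described above behaves like a simple contagion process. Here is a toy simulation – assumed parameters, no real networking, every machine modeled as a rebroadcasting Windows XP laptop – of how one prank hot spot name could end up everywhere:

```python
import random

def spread_ssid(n_laptops=50, meet_per_round=5, rounds=10, seed=1):
    """Toy contagion model of the phantom 'Free Public Wi-Fi' SSID.

    Assumption: each laptop behaves like Windows XP, i.e. once it has
    seen the SSID it rebroadcasts it as an ad hoc network, and any
    laptop that encounters a broadcaster starts rebroadcasting too.
    """
    rng = random.Random(seed)
    broadcasting = {0}  # one prank hot spot starts it all
    for _ in range(rounds):
        for laptop in range(n_laptops):
            if laptop in broadcasting:
                continue
            # This laptop encounters a few random others (airport lounge,
            # hotel lobby) and checks whether any of them is broadcasting.
            nearby = rng.sample(range(n_laptops), meet_per_round)
            if any(other in broadcasting for other in nearby):
                broadcasting.add(laptop)
    return len(broadcasting)

print(spread_ssid(), "of 50 laptops end up rebroadcasting")
```

Even with one seed machine and brief random encounters, the name saturates the population after a handful of rounds, which is why the phantom hot spot kept turning up in airports for years.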