Advocates of the concept called "singularity" envision a future in which humans and technology fully converge, but a keynote speaker at the World Future Society conference in the US last month voiced scepticism about the idea, citing the complexities of the human mind.
Proponents of singularity claim that in 20 years nanotechnology implanted in people will repair wounds and advanced robots will assist with daily tasks. The concept ultimately calls for people to transcend the limits of biology, using technology to develop into something more advanced and intelligent than human genetics allows.
Wendell Wallach, a scholar at Yale University's Interdisciplinary Centre for Bioethics, supports technology but labels himself a "friendly sceptic" on this marriage of people and machines.
While he is "excited by where the science will take us," Wallach, who spoke at the World Future Society conference, is a "sceptic because we don't know enough about humans to pull it off".
Wallach's critique of singularity focused on the intricacies of the mind, the difficulty of developing robots with morals, and the question of who is responsible when a robot's morals prove problematic.
He said the brain engages in massively parallel thinking and that researchers do not fully grasp how it operates. He contrasted this resilience with a computer, in which "one bit is out of place and Windows locks up".
He also said that computers face barriers in dealing with vision, language and locomotion.
Even if the body's detailed biological interactions can be replicated in a machine, computers may require consciousness to complete tasks, Wallach said. However, because we do not fully appreciate the complexities of humans, we do not know how difficult it will be to instill consciousness in a computer.
In addition, as robots handle more autonomous tasks they may require ethics and social skills, Wallach said. Introducing morals into machines raises the issue of whose morals are used and how machines learn their ethos.
Social skills would prove valuable if, for instance, a robot were assigned to deliver medicine to a patient and could tell whether the person was scared, Wallach said.
Despite the issues that accompany technology, society cannot stop its development. However, using technology to give people abilities not found in their genes raises the question of whether the process will lead to enhanced evolution or de-evolution, he said.
"Are we inventing the human species as we know it out of existence?" he asked.
An assessment of technology can determine risks and rewards. However, "risk assessment tools are very weak", and Wallach proposed creating a system that determines when "near dangers are on the horizon".
While robots with morals present long-term technology issues, in the near term Wallach calls for a deeper look at how humans and technology are developing together.
"Everyone acknowledges that we are in the midst of a huge technology and human shift," he said. "We have no one looking at this comprehensively. We need to think about the various terms of how this technological development will take place, now and in the long term."