Graphene Shows Promise for Repairing Broken Bones
March 4, 2019

When you were a kid, did you ever sign a classmate’s cast after they broke an arm or a leg? Your name would be on display there for the rest of the semester. Broken bones are one of the worst trade-offs in childhood—a few seconds of calamity followed by months of boring rest and recovery. But children in the future may have a different story to tell as emerging tech overhauls how we fix broken bones.

Carbon nanomaterials may have the power to heal bones faster than a Harry Potter fan can say ‘Brackium Emendo!’ Researchers from Stefanie A. Sydlik’s team at Carnegie Mellon University have tested a new formulation of graphene that is biodegradable, mimics bone, attracts stem cells, and ultimately improves how animals can repair damage to their skeletons.

As reported in PNAS, this phosphate graphene serves as a scaffold, allowing the body’s own cells to more rapidly reform the missing or damaged bone. The technique has already shown success in mice. As this technology matures it could become a vital part of orthopedic medicine, helping us recover faster with stronger, healthier bones.

Cast on the Outside, Scaffold on the Inside

The cornerstone of traditional orthopedic medicine has always been to immobilize bone breaks and allow the body to repair itself. Thankfully, our bodies do a great job repairing bones; with proper setting and enough time, bones can mend even very serious damage, turning out almost as good as new.

Modern physical therapy and recovery techniques have enhanced this “set it and forget it” approach by exploring how activity, diet, and rest can be balanced to get the best results with a broken bone. Truly traumatic injuries can require surgeries to install pins, plates, and other structures which mandate longer recovery times, more physical therapy, and quite frankly, way too much pain. There’s room for improvement overall, but especially in these most dramatic cases.

Sydlik’s research into graphene scaffolds represents the modern approach to orthopedics: going inside the body to maximize recovery from within. When the graphene is placed on and around the broken bone tissue, it serves as a structure for bone cells to bond and grow. Think of it like the wooden lattice you put up in a garden to encourage vines to climb and flourish. Unlike the garden lattice, the graphene scaffold is broken down as the bone cells grow in its place, effectively disappearing as the body repairs the injury. It’s the perfect patch, performing its job and leaving little behind.

A New Idea Made Even Better

The scaffolding approach isn’t new, but this study shows improvements in the design, formulation, and production of the phosphate graphene. Better nanotech methodology may not be very exciting, but it’s a big deal when your end goal is a practical health product that should be easy to make and use.

The scaffold is also highly customizable—attracting the right calcium ions, having a specific tensile strength, and other physical properties can be ‘programmed’ into the material as it is made, yielding a material that mimics real bone as closely as possible.

Perhaps most importantly, the study showed that the scaffold can work with or without the assistance of stem cells (in this case, bone marrow stromal cells, BMSCs). Most other forms of regenerative scaffold technology have relied on BMSCs to accelerate repair.

The phosphate graphene, however, provides a structure for normal bone cells to grow on and encourages them to do so. Being able to work without BMSCs means this technology would require less complex treatment plans when used in the real world.

Sooner Is Better Than Best

There are other technologies out there that could cure broken bones better than a scaffold, like printable cells, nanites, or cybernetics. But all of these technologies are much further from reaching the public. Phosphate graphene scaffolds would also integrate well with current medical procedures and care programs.

Once graphene scaffolding becomes an accessible part of healthcare, its real potential will arrive. Graphene is just carbon atoms arranged in a neat pattern, but the potential to vary the molecular composition is nearly infinite.

As researchers continue to develop it, phosphate graphenes (or similar graphene derivatives) could be further customized and optimized with a wide range of physical and chemical properties. Scaffolds that attract more stem cells, produce stronger bones, or pre-emptively deal with future breaks are all possible—that is to say, we haven’t even scratched the surface.

Image Credit: Puwadol Jaturawutthichai / Shutterstock.com

The Pediatric AI That Outperformed Junior Doctors
February 20, 2019

Training a doctor takes years of grueling work in universities and hospitals. Building a doctor may be as easy as teaching an AI how to read.

Artificial intelligence has taken another step towards becoming an integral part of 21st-century medicine. New research out of Guangzhou, China, published February 11 as a letter in Nature Medicine, has demonstrated a natural language processing AI capable of outperforming rookie pediatricians in diagnosing common childhood ailments.

The massive study examined the electronic health records (EHR) from nearly 600,000 patients over an 18-month period at the Guangzhou Women and Children’s Medical Center and then compared AI-generated diagnoses against new assessments from physicians with a range of experience.

The verdict? On average, the AI was noticeably more accurate than junior physicians and nearly as reliable as the more senior ones. These results are the latest demonstration that artificial intelligence is on the cusp of becoming a healthcare staple on a global scale.

Less Like a Computer, More Like a Person

To outshine human doctors, the AI first had to become more human. Like IBM’s Watson, the pediatric AI leverages natural language processing, in essence “reading” written notes from EHRs not unlike how a human doctor would review those same records. But the similarities to human doctors don’t end there. The AI is a machine learning classifier (MLC), capable of placing the information learned from the EHRs into categories to improve performance.

Like traditionally trained pediatricians, the AI broke cases down into major organ groups and infection areas (upper/lower respiratory, gastrointestinal, etc.) before breaking them down even further into subcategories. It could then develop associations between various symptoms and organ groups and use those associations to improve its diagnoses. This hierarchical approach mimics the deductive reasoning human doctors employ.
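To make that hierarchy concrete, here’s a minimal sketch of a two-level text classifier in Python using scikit-learn. The toy notes, labels, and model choices (TF-IDF plus logistic regression) are my own illustration of the general technique, not the study’s actual pipeline.

```python
# Minimal two-level ("hierarchical") diagnosis classifier sketch.
# All data and model choices here are illustrative, not from the study.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

notes = [
    "fever, cough, wheezing heard on exam",            # lower respiratory
    "runny nose, sneezing, mild sore throat",          # upper respiratory
    "vomiting and watery diarrhea for two days",       # gastroenteritis
    "abdominal pain, no bowel movement in five days",  # constipation
]
groups = ["respiratory", "respiratory", "gastrointestinal", "gastrointestinal"]
subdiagnoses = ["lower respiratory infection", "upper respiratory infection",
                "acute gastroenteritis", "constipation"]

# Level 1: route a note to a major organ system.
level1 = make_pipeline(TfidfVectorizer(), LogisticRegression())
level1.fit(notes, groups)

# Level 2: a specialist classifier for each organ system.
level2 = {}
for group in set(groups):
    subset = [i for i, g in enumerate(groups) if g == group]
    clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
    clf.fit([notes[i] for i in subset], [subdiagnoses[i] for i in subset])
    level2[group] = clf

new_note = "three days of fever and cough, crackles heard on exam"
group = level1.predict([new_note])[0]
print(group, "->", level2[group].predict([new_note])[0])
```

The real system learned this routing from 101.6 million data points rather than four toy notes, but the shape of the computation is the same: classify coarse, then classify fine.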

Another key strength of the AI developed for this study was the enormous size of the dataset collected to teach it: 1,362,559 outpatient visits from 567,498 patients yielded some 101.6 million data points for the MLC to devour on its quest for pediatric dominance. This allowed the AI the depth of learning needed to distinguish and accurately select from the 55 different diagnosis codes across the various organ groups and subcategories.

When comparing against the human doctors, the study used 11,926 records from an unrelated group of children, giving both the MLC and the 20 humans it was compared against an even playing field. The results were clear: while cohorts of senior pediatricians performed better than the AI, junior pediatricians (those with 3-15 years of experience) were outclassed.

Helping, Not Replacing

While the research used a competitive analysis to measure the success of the AI, the results should be seen as anything but hostile to human doctors. The near future of artificial intelligence in medicine will see these machine learning programs augment, not replace, human physicians. The authors of the study specifically call out augmentation as the key short-term application of their work. Triaging incoming patients via intake forms, performing massive metastudies using EHRs, providing rapid ‘second opinions’—the applications for an AI doctor that is better-but-not-the-best are as varied as the healthcare industry itself.

That’s only considering how artificial intelligence could make a positive impact immediately upon implementation. It’s easy to see how long-term use of a diagnostic assistant could reshape the way modern medical institutions approach their work.

Look at how the MLC results fit snugly between the junior and senior physician groups. Essentially, it took nearly 15 years before a physician could consistently out-diagnose the machine. That’s a decade and a half wherein an AI diagnostic assistant would be an invaluable partner—both as a training tool and a safety measure. Likewise, on the other side of the experience curve you have physicians whose performance could be continuously leveraged to improve the AI’s effectiveness. This is a clear opportunity for a symbiotic relationship, with humans and machines each assisting the other as they mature.

Closer to Us, But Still Dependent on Us

No matter the ultimate application, the AI doctors of the future are drawing nearer to us step by step. This latest research is a demonstration that artificial intelligence can mimic the results of human deductive reasoning even in some of the most complex and important decision-making processes. True, the MLC required input from humans to function; both the initial data points and the cases used to evaluate the AI depended on EHRs written by physicians. While every effort was made to design a test schema that removed any indication of the eventual diagnosis, some “data leakage” is bound to occur.

In other words, when AIs use human-created data, they inherit human insight to some degree. Yet the progress made in machine imaging, chatbots, sensors, and other fields all suggest that this dependence on human input is more about where we are right now than where we could be in the near future.

Data, and More Data

That near future may also have some clear winners and losers. For now, those winners seem to be the institutions that can capture and apply the largest sets of data. With a rapidly digitized society gathering incredible amounts of data, China has a clear advantage. Combined with their relatively relaxed approach to privacy, they are likely to continue as one of the driving forces behind machine learning and its applications. So too will Google/Alphabet with their massive medical studies. Data is the uranium in this AI arms race, and everyone seems to be scrambling to collect more.

In a global community that seems increasingly aware of the potential problems arising from this need for and reliance on data, it’s nice to know there’ll be an upside as well. The technology behind AI medical assistants is looking more and more mature—even if we are still struggling to find exactly where, when, and how that technology should first become universal.

Yet wherever we see the next push to make AI a standard tool in a real-world medical setting, I have little doubt it will greatly improve the lives of human patients. Today Doctor AI is performing as well as a human colleague with more than 10 years of experience. By next year or so, it may take twice as long for humans to be competitive. And in a decade, the combined medical knowledge of all human history may be a tool as common as a stethoscope in your doctor’s hands.

Image Credit: Nadia Snopek / Shutterstock.com

How Today’s Jungle of Artificial Intelligence Will Spawn Sentience
August 9, 2016

From time to time, the Singularity Hub editorial team unearths a gem from the archives and wants to share it all over again. It’s usually a piece that was popular back then and we think is still relevant now. This is one of those articles. It was originally published August 10, 2010. We hope you enjoy it!


You don’t have a flying car, jetpack, or ray gun, but this is still the future. How do I know? Because we’re all surrounded by artificial intelligence. I love when friends ask me when we’ll develop smart computers…because they’re usually holding one in their hands. Your phone calls are routed with artificial intelligence.

Every time you use a search engine you’re taking advantage of data collected by ‘smart’ algorithms. When you call the bank and talk to an automated voice you are probably talking to an AI…just a very annoying one. Our world is full of these limited AI programs which we classify as “weak” or “narrow” or “applied.”

These programs are far from the sentient, love-seeking, angst-ridden artificial intelligences we see in science fiction, but that’s temporary. All these narrow AIs are like the amino acids in the primordial ooze of the Earth. The ingredients for true human-like artificial intelligence are being built every day, and it may not take long before we see the results.

How did we create the jungle of AI that surrounds us today?

Let me answer that question with someone else’s question. During the panel discussion for the Transcendent Man documentary about Ray Kurzweil at the Tribeca Film Festival, a viewer asked the futurist if there would be another explosion of AI that leads us to the singularity. Another explosion?

Yes, you see, back in the late 80s scientists started rethinking the way they pursued AI. Rodney Brooks of MIT (also one of the founders of iRobot) took a new approach. Instead of developing AI from the top down, he looked at building things from the bottom up. Instead of artificial reasoning, he looked at artificial behavior.

The result was robots that based their actions upon basic instincts and patterns. iRobot’s Roomba doesn’t vacuum a floor with high-level reasoning about how the carpet should eventually look; it performs a bunch of different cleaning patterns until it knows the whole carpet’s dirt-free.

That’s behavior-based AI, and it’s powerful stuff.
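To see what “behavior, not reasoning” means in code, here’s a toy subsumption-style controller of my own devising (no relation to iRobot’s actual firmware): each behavior is a simple reflex, and the highest-priority reflex that fires wins control of the robot.

```python
# Toy behavior-based (subsumption-style) controller.
# Purely illustrative -- not iRobot's actual Roomba firmware.

def escape(state):
    """Highest priority: back off when the bumper is pressed."""
    return "reverse and turn" if state["bumper_pressed"] else None

def spot_clean(state):
    """Middle priority: spiral over an unusually dirty patch."""
    return "spiral" if state["dirt_level"] > 0.8 else None

def wander(state):
    """Lowest priority: default coverage pattern."""
    return "drive forward"

BEHAVIORS = [escape, spot_clean, wander]  # highest priority first

def arbitrate(state):
    """The first (highest-priority) behavior that fires gets control."""
    for behavior in BEHAVIORS:
        action = behavior(state)
        if action is not None:
            return action

print(arbitrate({"bumper_pressed": False, "dirt_level": 0.9}))  # -> spiral
```

No map, no plan, no model of what a clean room looks like; yet run in a loop, these simple reflexes add up to competent vacuuming.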

Along with increased processing power, artificial intelligence really took off in the 90s. Using modular and hierarchical techniques like Brooks’ behavior-based approach, researchers were able to create a bunch of AIs that did things. These weren’t philosopher programs, they worked for a living. Data mining, inventory tracking and ordering, image processing — these jobs all started falling to AIs that built simple patterns into algorithms that could handle dynamic tasks.

Now that list of tasks has expanded. We’re slowly building a library of narrow AI talents that are becoming more impressive. Speech recognition and processing allow computers to convert sounds to text with greater accuracy. Google is using AI to caption millions of videos on YouTube.

Likewise, computer vision is improving so that programs like Vitamin D Video can recognize objects, classify them, and understand how they move. Narrow AI isn’t just getting better at processing its environment, it’s also understanding the difference between what a human says and what a human wants.

Programs like BlindType compensate for human input error, and next-generation phone-answering services convert your requests into commands. By assigning values to different situations, narrow AIs can make choices that maximize their rewards, an approach that let the ASIMO robot figure out the best way through an obstacle course.
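That value-assignment idea is reinforcement learning in miniature, and it fits in a few lines. In the tabular Q-learning sketch below, the corridor environment, rewards, and parameters are all invented for illustration; ASIMO’s actual planners are far richer.

```python
import random

# Tabular Q-learning sketch: learn to walk a 5-cell corridor to a goal.
# Environment and rewards are invented; this is not ASIMO's controller.
N_STATES, GOAL = 5, 4
ACTIONS = (-1, +1)                     # step left, step right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1  # learning rate, discount, exploration

for episode in range(500):
    s = 0
    while s != GOAL:
        # Epsilon-greedy: usually exploit the highest-valued action.
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s_next = min(max(s + a, 0), N_STATES - 1)
        reward = 1.0 if s_next == GOAL else -0.1  # values assigned to situations
        # Update the estimate toward reward plus discounted future value.
        best_next = max(Q[(s_next, act)] for act in ACTIONS)
        Q[(s, a)] += alpha * (reward + gamma * best_next - Q[(s, a)])
        s = s_next

# After training, the greedy policy steps right from every non-goal state.
print([max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(GOAL)])
```

Swap the corridor for joint angles and footfalls, and the lookup table for a function approximator, and you have the family of methods that lets a robot learn its way around obstacles.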

Artificial intelligence is also getting better at analyzing large sets of data and synthesizing new data that fits the set, which we’ve seen in programs that write music or create new art.

Kurzweil and others predict the continued growth of processing power, which will help enable a human-like artificial intelligence. (Editor’s note: This exponential trend in supercomputing power has continued through today.) Image Credit: Ray Kurzweil/The Singularity Is Near

These are the building blocks for the next explosion of AI tools.

Do you want a security guard AI? Computer vision plus interpretation of human actions? How about a program that answers your toddler’s endless questions? Speech recognition plus interpretation of human actions plus a large database of knowledge plus creation of new datasets (we’ve already seen it work for Jeopardy!)?

Of course, things aren’t simply plug ‘n play at this point, but you can see that as each application of narrow AI is perfected it can feed into a more complex task.

There are three key factors that will enable the creation of a strong artificial intelligence that can think like a human being.

  • We need greater computer power to match and mimic the brain.
  • We need to better understand the hardware of the brain and the way it processes information.
  • We need to find ways that an AI can approach higher and higher levels of problem solving.

Each of these requirements is on its way to being fulfilled.

Kurzweil (among others) predicts the continued exponential growth of processor power. The Blue Brain Project (and other research) is exploring the brain and seeking to simulate its functions. I think that the growing presence of narrow AI speaks to the third need.

There are many different approaches to AI research, and not all of them are compatible, but as we develop more and more programs that can handle simple decision making I think we are building a library of problem solving that will eventually develop into a human-like hierarchical reasoning structure.

When is the first sentient computer lifeform going to arrive? I have no idea.

But the seeds of its birth are scattered through the advanced technologies we use every day. So pick up your smart phone while traveling on a moving train, call an international bank, and ask the artificial voice that answers to recite your last ten financial transactions.

You’ll be flexing the muscles of many different modern AIs, and you know that the exercise is good for the brain.


Image credit: Shutterstock

Welcoming Your New Robot Overlords
July 16, 2013

Last year at the second annual Bay Area Art & Science Interdisciplinary Collaborative Sessions (BAASICS), I was invited to give a brief presentation on any future-related topic I wished. As a Singularity Hub alum there was only one real option: robots. More specifically, why the concept of the “robot uprising” is simultaneously ridiculous and prophetic.

Weaving together pop culture, industrial statistics, and techno-optimism in what I hope is an entertaining manner, this BAASICS talk aims to persuade even the most hard-headed Luddite to reconsider their stance and “welcome their new robot overlords”:

Many of the themes explored in this presentation were derived from the observations and articles created while I worked at Singularity Hub. My opinions on the absurdity of robots in mainstream fiction, the continuous advancement of industrial automation, and Ray Kurzweil’s persuasive graphs on the exponential growth of intelligence are all standard issue concepts for those of us who like to stay abreast of the cutting edge of modern technology. These same ideas, however, are far from ordinary to the majority of our 21st Century peers, which is why it was so rewarding to see how well my silly little robot presentation was received by the BAASICS audience.

BAASICS, now in its third year, is an ongoing effort to bridge the gap between science and the arts with colorful presentations, performances, and discussions — all provided for free to the San Francisco, and online, community. Like the juggernaut TED conference or the more indie BIL conference, BAASICS seeks to fire imaginations while informing attendees about the wondrous natural and technical world around them. Produced by Selene Foster and Christopher Reiger, BAASICS is an organization worth watching as online video continues to develop as a powerful tool to connect local innovators with global audiences.

As a prediction of the future, my presentation on robot overlords has little chance of standing the test of time. Even now, barely a year later, the statistics I quote and the examples I use seem outdated. Foxconn is well underway with its march towards a million industrial robots, and the latest wave of research-quality machines are smarter, faster, and more creative than ever. If automation continues at this pace, I’ll soon be outdistanced by the reality of robotics.

It’s not the winner of the race, however, that really matters – it’s the destination. Whether it takes us a few decades or a few centuries, humanity should eventually arrive at a world where machines do the majority of the work, both physical and intellectual. Creativity — our last best hope for a human skill impervious to robotic emulation — is already showing signs of invasion. I’ll concede all my other points but this one: the future of labor belongs to machines. It’s only a matter of time.

As a people then, the only real question remaining to us is the big question I presented at BAASICS: when the robots take over, what will they be like?

Maybe it’s the optimist in me, maybe I’ve watched too many Kurzweil videos, or read too many tech articles online, but I think those robots will be, in part or whole, an extension of us. Homo sapiens are a species with a deeply symbiotic relationship with technology and I see no reason why that relationship would cease simply because the technology becomes more advanced. We may change, perhaps beyond all present recognition, but that bond between man and machine seems unbreakable.

In my own humble opinion, which I will peddle online, in person, and wherever else they let me speak, the future of automation is just another phase of humanity waiting to unveil itself. I have seen the robot overlords, and they are us.

“Helping a Billion People” Began Last Night – Singularity University Opens 4th Summer Program
June 19, 2012
Singularity University’s fourth Graduate Studies Program began last night with excitement and aplomb. SU is the world’s premier educational institution in the field of understanding, and harnessing, exponential growth in science and technology. Cofounders Ray Kurzweil and Peter Diamandis, along with Program Director David Roberts and keynote speaker Julie Hanna of Kiva, welcomed in the SU Class of 2012 with messages about the importance of personal experience, inspiration, creating opportunities, and positive mindset. The class of 80 participants represents 36 nations from around the world, and Singularity University wants these emerging leaders to improve the lives of one billion people in the next decade. It’s a lofty goal, but one that is matched by the talent of SU’s participants and faculty, as well as the power of accelerating trends in technology.

The following video gives the highlights of the evening. Full-length video of the entire ceremony will be available later from Singularity University and Singularity Hub. (Details will be forthcoming.)

The Graduate Studies Program lasts for ten weeks this summer, and Singularity Hub will have reporters on campus right up to, and following, the Closing Ceremony in August. Stay tuned for regular updates on SU’s amazing faculty, phenomenal participants, and the incredible events they share during this intense educational process. As always, please feel free to request coverage of particular moments or themes, and we’ll try our best to accommodate the public’s interest in this potentially life-changing institution as it rises to take its place on the world stage.

[image credits: Singularity University and Singularity Hub]

Singularity University 2012 – The Graduate Studies Program is Near and We Have Insider Coverage
June 16, 2012

Monday June 18th marks the start of Singularity University’s 10-week summer Graduate Studies Program. Based at NASA Ames in the heart of Silicon Valley, SU has become the premier institution in educating the next generation of leaders about the disruptive power of accelerating technologies. Singularity University understands that the small advances in science today could compound into enormous changes in the near future. Individuals, businesses, and even nations will rise or fall depending on how they prepare for and harness those changes. SU wants its participants to use exponential growth in technology to help a billion people in the next ten years. For the next ten weeks Singularity Hub will be on campus, ready to provide you with an unparalleled look at the university that is poised to transform the world.

Starting with opening ceremonies this Monday, and continuing beyond closing ceremonies in August, Singularity Hub is going to have reporters on the ground at SU. We’ll interview students, faculty, and special guests. We’ll be at lectures, presentations, and exciting field trips (Google, AutoDesk, and more!). We’ll hear about the latest innovations in artificial intelligence, robotics, nanotechnology, genetics, synthetic biology, bioinformatics, energy, finance, and design. And we’ll be sharing it all with you, our Singularity Hub readers.

Why?

For three years now Singularity University has been quietly shifting the educational landscape with its unique perspective on shaping innovation and innovators. This really could be the institution that helps solve humanity’s grand challenges. The following video, created at the end of last summer’s Graduate Studies Program, gives you a taste of SU’s vision:

As seen in the video, there are three things that make Singularity University’s Graduate Studies Program stand out: the students, the faculty, and the mission. This year, over 3000 applied for one of the 80 slots in GSP 2012, and those that were chosen highlight the talent that can be found in such enormously competitive admissions. Two-thirds have post-graduate degrees, with 10% pursuing a PhD or equivalent. Many have already started successful businesses, written books, and been teachers themselves. It’s no wonder then that SU prefers to call them “participants” and not “students”. These individuals already have a lot to share with the world.

And SU is definitely a global university. In the GSP ’12 class alone there are 36 nations represented: Algeria, Argentina, Brazil, Bulgaria, Canada, Chile, China, Croatia, Czech Republic, Denmark, Dominican Republic, Estonia, France, Germany, Ghana, Hungary, India, Indonesia, Iran, Ireland, Israel, Italy, Latvia, Mexico, Netherlands, Nigeria, Pakistan, Palestine, Poland, Russia, Spain, Sudan, Sweden, Switzerland, UK, and the US.

As part of their mission to find the best talent in the world, SU hosts competitions to find remarkable local innovators. 25% of the GSP’12 class comes from these competitions.

Perhaps most telling is that a full quarter of the class (20 participants) got their seats by winning regional competitions around the world. Singularity University guides Global Impact Competitions with the support of local partners to find innovators ready to improve their native homes. These participants are already community leaders; SU is ready to give them the tools to make them global leaders.

That empowerment is possible because Singularity University has attracted some of the most prominent names in advanced technology. Daniel Kraft, head of SU’s Medicine and Neuroscience track, is a renowned researcher in regenerative stem cell technologies. Andrew Hessel, Co-Chair of the Biotech track, is famous as an advocate for synthetic biology and next generation pharmaceuticals. Peter Norvig is Director of Research at Google, and an advisor in AI and Robotics. Dan Barry, Head of Faculty, is a former astronaut with doctorates in medicine and computer science. The list goes on and on.

The participants, faculty, staff, and founders of Singularity University at last year’s GSP. Don’t call them students and teachers; they’re on this journey together.

For ten weeks this summer, participants and faculty will walk side by side as they discuss what humanity needs, and what technology will provide in the decade ahead. This all culminates in the formation of project groups, each targeted to solve a major problem in the world that affects billions. Many of these projects will go on to become startup businesses, not just in Silicon Valley, but in emerging innovation centers around the world.

A sign of SU’s growing momentum is their list of corporate partners, including Google, Autodesk, Genentech, and Nokia.

The transition that many participants make from proven leaders to targeted entrepreneurs reflects the underlying mission of Singularity University. Founders Ray Kurzweil and Peter Diamandis wanted to create an institution that not only taught the world about accelerating technologies, but found and shaped the talent that would use those technologies to improve the lives of everyone on the planet. SU helps its participants aim for a “10^9+ impact” – helping a billion or more people in the next ten years. This organization wants nothing less than to solve some of humanity’s grand challenges such as poverty, energy, education, global health, security, and the exploration of space.

Many of you already know about the potential power of artificial intelligence, synthetic biology, nanotechnology, and other emergent technologies. Singularity Hub covers them every day. But we hope that this summer we’ll be able to show you how Singularity University is actually taking what we discuss as possibilities and planning to use them as tools to build humanity a better future.

Stay tuned, Hubbers, on top of all our regular (and awesome) content cake, we’re going to give you a lot of Singularity University icing. Bon appetit!

[image and video credits: Singularity University]

Founders of Leap Motion: Our Amazing 3D Tracking Will Be Everywhere
June 13, 2012
In the short time since its debut, the Leap Motion has inspired zombie-like devotion in many gadget lovers, but can the device live up to the hype? (Yes, yes it can).

In the past few weeks the Leap Motion device has sent shudders of delight through gadget lovers and computer designers alike by promising a new kind of ultra-accurate, and very cheap, optical 3D tracking for your desktop or laptop computer. Forget the Kinect: Leap Motion is cheaper ($70), more precise (down to 0.01 mm), and much smaller (think “pack of gum” proportions). The incredible demo for the Leap Motion (see below) shows how the desktop device can quickly detect hand motion so that a user need merely wiggle their fingers in front of their computer to intuitively control what happens on the screen. Currently taking pre-orders, the Leap Motion is scheduled to ship between December and February, and with it will come a new market of third party apps designed to take full advantage of the device. I had a chance to sit down with CEO Michael Buckwald and CTO David Holtz and test out the Leap Motion firsthand. If things go their way, the Leap Motion will become the “third input device” for computers, joining the keyboard and mouse in a new triumvirate of digital control.

For those who missed earlier coverage of the Leap Motion’s debut here’s the official promo video for the 3D tracking device:

While at their HQ, Singularity Hub got a lot of raw footage of the device in action. The following video is silent, but you can see many of the demos in much greater detail. Notice that the Leap Motion used here, and in all press demonstrations, is a prototype that is slightly larger than the form factor that is being sold. It is the flat black box lying in front of the keyboard. The commercial unit will also connect via USB (not wireless), and will be smaller, silver, and lighter weight. Buckwald and Holtz said they could conceivably make the Leap Motion even smaller (the size of a large coin) but they think this size/weight is best for a $70 device that you don’t want to lose easily.

If someone can watch the demonstration of Leap Motion and not feel their jaw dropping, they probably don’t understand how extraordinary this technology really is. That’s okay, I didn’t either at first. Buckwald and Holtz explained that the magic of Leap Motion isn’t the array of infrared sensors and IR LED lights that are contained inside the sleek form factor – those are all cheap components from China. The secret sauce is software, specifically the algorithms developed by Holtz that convert those IR signals into a well-crafted 3D picture of what’s happening in front of the Leap in real time.

It’s hard to summarize those algorithms in any other way besides this: they are really, really good. The mid-range performance computer seen in our raw demo footage was spending just 5% of its processing power to run the Leap Motion software, yet it was tracking finger motions down to sub-millimeter precision and at speeds that exceed anything I could achieve even after drinking a dozen Red Bulls.
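Holtz understandably isn’t publishing those algorithms, but the geometric core of any two-camera IR tracker is textbook stereo triangulation: a feature seen by two cameras a known distance apart shifts between the two views, and that shift (the disparity) gives you depth. Here’s the textbook formula as a minimal sketch with invented numbers; this is emphatically not Leap’s proprietary pipeline.

```python
# Textbook stereo depth-from-disparity for a rectified camera pair.
# Numbers are invented for illustration; Leap's pipeline is proprietary.

def depth_from_disparity(x_left, x_right, focal_px, baseline_mm):
    """Z = f * B / d: depth of a feature seen in both images.

    x_left, x_right -- horizontal pixel position of the same feature
                       (say, a fingertip blob) in each camera's image
    focal_px        -- focal length expressed in pixels
    baseline_mm     -- distance between the two cameras
    """
    disparity = x_left - x_right
    if disparity <= 0:
        raise ValueError("feature must shift between the two views")
    return focal_px * baseline_mm / disparity

# A fingertip shifted 40 px between views, f = 400 px, 40 mm baseline:
z = depth_from_disparity(x_left=320, x_right=280, focal_px=400.0, baseline_mm=40.0)
print(f"estimated depth: {z:.1f} mm")  # -> 400.0 mm
```

The formula is the easy part; finding and matching the same fingertip across frames hundreds of times per second, robustly, on 5% of a CPU, is presumably where that software earns its keep.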

The incredible thing is, as Holtz says, they are only “about 50%” of where they will eventually be when the Leap Motion launches this winter. That’s not a comment on how much they have left to do to get ready; it’s a comment on how far they aim to exceed everyone’s expectations when they hit market. Time and again during our interview, Buckwald impressed upon me that Leap Motion was squarely focused on creating a base experience that was so well-crafted, so robust in execution, that it would be an unequivocal boon to users.

Part of that base experience is the 3D motion tracking that you see in the demos. The unseen part of the equation is the Leap Motion API that accompanies the device. With that API, third party developers will have a very reliable, very flexible means of communicating with the Leap Motion. The company has already pursued dozens of high-end developers to create marketable apps to be released with the product’s launch. Developers of all sizes, however, are welcome to request a free SDK from the company, and Buckwald says he is ready to hand out “thousands” if not “tens of thousands” of developer devices to help create the initial wave of apps. The goal is to have a “Leap Motion Market” through which developers can sell their wares, and which will generate revenue for Leap Motion by taking a small slice of the profits. Everyone seems likely to be invited to that market, from amateur freeware enthusiasts to hardcore app developers with premium services to offer.

What forms will those apps take? Some are easily suggested by the demos. The Leap Motion could work as a new means of navigating maps and other 3D digital spaces, it could be a fun game controller, and it could even serve as a handwriting-to-text recorder. Holtz was originally inspired to develop the earlier versions of the device because there wasn’t a good way to handle 3D sculpting and modeling – there’s another great app idea. The greatest part of the 3D market space, however, is yet to be explored. That’s as it should be, because it will take the industry a while to really understand what kinds of applications this hyper-accurate, short-range 3D tracking is best suited for.

When it first came out, the Kinect was marketed as a “revolution” in gaming, with full-body tracking and “intuitive controls” for the Xbox 360. But that turned out to be silly. Game developers have yet to produce a Kinect title that I really enjoy, and the coolest applications I’ve seen for the device are as cheap long-range sensors for mobile robots, and one-to-one controls for humanoid machines. Buckwald and Holtz don’t want to create a similar device, or user market, wherein customers buy the product, play with it for a few days, and then put it down because it’s not practical. The grand challenge is to make the Leap Motion an integral part of how you use your computer.

This was the promise of the Kinect – 3D tracking would transform your whole body into a game controller. In actuality the experience is kind of meh.

The real innovation provided by the Kinect could be cheap 3D sensing for robots. Could the Leap Motion ultimately make a similar turn in its development?

So then, what’s the real application space for the Leap Motion? I’m happy to say I don’t know. It could allow for a computer control system like that seen in Minority Report, but most of us don’t need to pore through visualized 3D data. It could be a cheaper way to integrate touchscreen-like controls without the actual touching (no smudges!). Leap Motion could be all about high-speed gesture controls for those times when you need to quickly mute videos, swap between windows, or answer a chat. Hell, for all I know Leap Motion could be the next big thing in porn. It’s anybody’s guess.

Rest assured that Buckwald and Holtz are preparing for everything and laying some truly impressive groundwork for all future development. While the learning curve is severe for each demo I tried, there was little doubt that the Leap Motion knew exactly where I was and how I was moving my hand. The core technology (the tracking algorithms) is well beyond what we need, and they’re continuing to improve it. The API is open and free to develop for, and they’re actively seeking to enrich that community. They have a small (~22-person) dedicated team working on making sure that come six months from now, when you unwrap your Leap and plug it in, you’ll use it as if it’s always been a part of how you control your computer.

Considering the fickleness of the gadget market, that’s a hefty dream. The Wii, the Kinect, and the PlayStation Move have all tried to reinvent game controllers but seem to fail as often as they succeed. Touchscreens are moving to dominate the mobile market, but aren’t well suited to the desktop/laptop space. All the fancy optical gesture control schemes we’ve seen crop up in the last few years are impressive, but they’re not household names. At least not yet. Leap Motion wants to be more than any of these previous attempts at redefining human-computer interaction.

Perhaps that won’t happen at the desktop computer at all. Buckwald says they are in talks with many different potential OEM partners to see how modified IR sensor arrays could be created so that Leap-like tracking could be performed in many different spaces. Why not Leap controls in the car, on your mobile device, or in your kitchen appliances? Holtz had dozens of long-term concepts where the efficient and powerful Leap algorithms might be applied. One of the best was as the controls for a head-mounted display (does Google Glass need a partner?). Anywhere you can place some IR LEDs and sensors, Leap Motion can be there.

The bottom line is that Leap Motion has millions in funding ($15M to be exact), an amazing pre-market device, and tons of positive press. Their core technology is sound, their vision for a developer market is laudable, and they’re in a market ripe for innovation. This is as sure as things get in Silicon Valley. Buckwald says they’re on target to sell hundreds of thousands, if not millions, of Leap Motion devices in the next year. I believe that will happen. He and Holtz also think that the Leap version of 3D tracking and control could be everywhere in the future. I want to believe that too…but I’ll wait until winter to say for sure.

Buckwald and Holtz were pretty tired when we spoke (ground-breaking input devices apparently take very long hours) but they let me record everything we talked about. I’ll leave you with the video of our conversation where they both discuss the Leap Motion in greater detail.

[images: Leap Motion, Kinect/Microsoft, ROS]

[video: Leap Motion, Aaron Saenz/Singularity Hub]

[source: Michael Buckwald and David Holtz / Leap Motion]

Screwed by ZionEyez? Vergence Labs Will Give You A Pair of Their Video Glasses for Free!
June 4, 2012

***As always: the opinions expressed in this article are those of the writer, and may not reflect the opinions of Singularity Hub, its advertisers, or its owners, and have not been reviewed or endorsed by any of the businesses mentioned in this article. In other words: if you wanna grief, grief Aaron. He likes it.***


***Update June 6: In a truly unexpected move, Kickstarter has suspended the Vergence Labs project. All donors have had their pledges returned, and Vergence won’t be getting any of the money. While Kickstarter has yet to comment on the exact reasons behind the suspension, Vergence Labs is staying positive and moving forward with a new crowd-funding drive on Indiegogo. There are still 45 days left in the new Indiegogo project, so hopefully Vergence Labs can recoup their losses and then some.***

ZionEyez dropped the ball, Vergence Labs wants to pick it up. Anyone who pledged money to ZionEyez to get a pair of their yet-to-actually-be-delivered video camera glasses, can now recoup their losses by joining the Vergence Labs Kickstarter. A completely unrelated company, Vergence Labs also has a video glasses device they are prepping for launch, and they want the world to know they are the group who can deliver. When you pledge to Vergence to get a pair of their video glasses, they will give you a bonus pair if you also pledged to ZionEyez. In other words: Vergence Labs is covering for ZionEyez’s failure…because they are just that awesome.

If you’re interested, act now, because there are fewer than 58 hours left in Vergence Labs’ Kickstarter project. Here’s a look at what their video glasses prototype can do, and what the retail model should do:

Last year ZionEyez conquered Kickstarter by raising an impressive $343,000 to create a pair of wearable glasses with an embedded video camera. With an engineering pedigree that included work at Flip, ZionEyez’s leadership promised they could deliver those video glasses by the winter holidays. Well, it’s nearly a year later, and ZionEyez is still discussing how it needs to expand its team and rework its designs. Most of us who pledged money to ZionEyez in hope of owning a pair of camera glasses are now writing off our donations as a complete loss. It’s a sad reality that not every Kickstarter project will be able to honor its commitments.

Unless, of course, another company comes along to augment that reality, and that’s exactly what Vergence Labs has done. Incubating at Stanford’s prestigious StartX program, Vergence Labs is a very small, very dedicated company whose first product just so happens to be a pair of video glasses that share footage through social media. The video glasses from Vergence Labs are aimed at doing everything that the glasses from ZionEyez were supposed to do, but more. Vergence Labs has “electronic shades” on their device, and even have their own dedicated upload site, YouGen.TV, for all the videos you’ll capture with your camera glasses.

Singularity Hub had a chance to speak with Vergence Labs this past weekend during a StartX event, and we mentioned (as I’m sure many others did as well) that Kickstarter probably has a bad case of video-glasses-fatigue due to the disappointment surrounding ZionEyez. Well, just a few hours ago, Vergence Labs updated their Kickstarter project with their new offer to all past ZionEyez donors. If you pledged to ZionEyez, and you now pledge to Vergence Labs to get a pair of their glasses, they will give you a bonus pair. It’s like the new company is making recompense for the failures of the old, but the crazy thing is, these two businesses aren’t related in any way. Vergence Labs is doing this simply because it helps those who want video glasses to get a pair, and because it’s the right thing to do for the community.

That earns my admiration.

But I do have to warn everyone that as much as I love Vergence Labs, there are still no guarantees on Kickstarter. Despite some appearances to the contrary, it’s legally a donation, not a pre-order. $200 for a pair of video glasses is relatively cheap, especially with a bonus pair thrown in, but it’s not something to do lightly. Vergence Labs does have a working prototype, and they have shared video footage from it, but ZionEyez did that as well.

I can say that I’ve met Vergence Labs, I’ve talked with Vergence Labs, and I think these guys are the real deal. As we mentioned before, video glasses are just the first step in a much bigger vision for these guys. They want to redefine the human-machine paradigm through augmented reality. Erick Miller and Jon Rodriguez are going to make big waves in the tech world – maybe even with this first product.

If you’ve lost money to ZionEyez, but still want a pair of kickass video glasses, check out the Vergence Labs Kickstarter page. Their offer is only going to be good until the project closes, and that’s in less than two and a half days, so you better act soon.

Maybe it’s time to double down?


[sources: Vergence Labs via Kickstarter]

Submit to the Robots! …Or At Least To Their Film Festival. RFF 2012 Coming July 14th to NYC
May 22, 2012

**Update** The submission deadline for RFF 2012 has been extended from June 7th to June 15th!

It’s about time that the future of cinema reflected the future of the world. The second annual Robot Film Festival is gearing up to take NYC by storm on July 14th. While dedicated to showcasing films that feature robotic characters and themes, last year’s RFF included a huge range of movies. Documentaries on lunar explorers, tongue-in-cheek rap videos, heartfelt tales about robot affection – RFF 2011 proved to the world that robot-themed cinema has as much to offer as the film industry as a whole. The 2012 Robot Film Festival looks to be even better. Two headlining films have already been announced: I’m Here by Spike Jonze, and Robot & Frank starring Hollywood legend Frank Langella. And things are just getting started. Aspiring filmmakers can still submit their robot-inspired works to RFF 2012 through June 15th (extended from the original June 7th deadline). Don’t miss the opportunity to be part of one of the most promising new film festivals in the world.

To get you excited for July 14th, Singularity Hub has gathered a collection of videos for you to watch below, including previews of the two headlining films, and many of last year’s winners. Enjoy!

First, here’s the trailer for Spike Jonze’s I’m Here – A Love Story in an Absolut World. All the eccentric excitement you’ve come to expect from a Jonze film, just with more robots:

The other headlining film, Robot & Frank by Jake Schreier, is a compelling story about the bonds that can develop between man and machine. Frank Langella plays an aging father facing a loss of memory, mental flexibility, and freedom. Is his new live-in robot helper a key to a better life, or a very shiny shackle? Here’s a clip:

Part of what made the inaugural Robot Film Festival so interesting was the variety of films that made it in. Moonrush, by Jonathan Minard at Deepseed Media, is a documentary look at the work and vision of William “Red” Whittaker, a world-renowned roboticist with designs on getting automated explorers to the moon. In sharp contrast is the winner of Best Picture 2011: The Machine by Bent Image Lab. A morality play and creation myth wrapped up in the trappings of artificial life, The Machine is a compelling piece of animation. It’s amazing that you can have both documentary and campfire fiction play so well together in the same film festival. Both shorts are available to watch in full below:

Much of what works for the Robot Film Festival is the balance between films that tug on your emotions, and films that expand your mind. In the first category belongs my personal favorite, Chorebot by Greg Omelchuck, available to watch on Vimeo here. Also in the emotional category is Waiting for Name Assignment by Alvaro Gavan, which won an award for best human playing a robot:

In the mind expansion category I would include Absolut Machine: Absolut Quartet by Jon Lieberman and Don Paluska, which shows a group of delightfully designed automated instruments. There’s also Operation daVinci by LCSR Robotics (winner of the Audience Award), which showcases the real-world da Vinci surgical robot:

Finally we have those videos which just sort of fry your mind a bit to watch. Saturn by iStave Creative is visually stunning with a dynamic audio accompaniment that really makes it feel like a trippy music video. It’s also not really safe to view at work – you can watch it here. I’ll end with a film that fries your mind in a completely different way: with brain-twitching groaning and meta-ironic smirking. Me and My Robots by Jay Kila is either the worst or best thing you’ll see from the 2011 Robot Film Festival. You can decide for yourself:

RFF 2012 promises to be even better than the opening year. The headliners are great, the production team is enthusiastic, and the buzz around the web is favorable. If you or someone you know has a robot-themed short film that they want to share with the world, submit now. The June 15th deadline (extended from June 7th) is looming. For the rest of us, watching the 2012 Robot Film Festival should be excitement enough. Not sure what you’re doing July 14th, but if I can be in NYC, I know how I’ll be spending my time.

[image credit: RFF 2011/2012]

[source: RFF]

Exclusive Interview with COO of Drchrono: iPads + Medicine = The Future
May 6, 2012
Drchrono allows doctors to store all patient data digitally on iPads and other mobile devices

Meet Drchrono, the free app for your mobile device that could revolutionize healthcare in the modern world. Founded by Daniel Kivatinos and Michael Nusimow, Drchrono is a software platform that eliminates the need for paperwork, because it tracks everything digitally. Doctor’s notes, test results, symptoms – they all go into your electronic medical record (EMR), and can be pulled up anywhere (with your permission) no matter where your doctor or you may be. Available on most mobile platforms, including a doctor’s new best friend – the iPad – Drchrono lets you simply enter medical information on the same device you use to play Draw Something. Besides filling out EMRs, Drchrono lets doctors check insurance eligibility, take dictation, or even snap a few photos. Patients use it to enter their own data, leave questions for their doctor, and (eventually) to track their health. To date Drchrono has been downloaded by medical professionals more than 20,000 times for the iPad alone. Singularity Hub spoke with Daniel Kivatinos as part of the Membership Program’s regular Hangouts with tech VIPs. Kivatinos discussed the history, scope, and future of Drchrono, and highlights from that conversation are available in the video below. Not only is Drchrono digitizing the modern doctor’s office, it’s also the platform that could drastically improve healthcare by handling all the boring administrative tasks automatically, allowing you and your doctor to focus on what’s really important: getting you healthy again.

The following video is a highlight reel of the Singularity Hub Membership Program’s Hangout with Daniel Kivatinos. I’ve also included a brief demo video at the beginning to give newcomers a better idea of what Drchrono does and looks like. Members, of course, can watch the full video. (Not a bad reason to join, huh?)

As Kivatinos explains in the video, Drchrono’s whole aim is to change healthcare. Traditionally, doctors and nurse practitioners record patient information on paper files – files they have to store, send for, and modify by hand. With Drchrono, both medical staff and patients can input information digitally, and share that data quickly as needed. No matter where a patient travels, their digital medical records can be updated efficiently – holding all the test results, doctor’s notes, and patient questions that they need to be complete. The core features of the Drchrono platform, which are available for free, fulfill all the requirements for US doctors to earn Medicare incentives for adopting EMR in their practice. That’s why switching from paper files to Drchrono can earn doctors up to $44,000 from the government!

The promise (and financial allure) of Electronic Medical Records is great, but Drchrono goes well beyond simply enabling EMR. Kivatinos wants to make access to patient records not only efficient but natural. Patients can be handed an iPad, take their own picture for records, and type in questions they want to ask their doctor. When doctors call up the file, all that information is right there in their hands. Doctors can add in notes by hand, or just by talking, using medical speech-to-text (a pay feature for Drchrono). Before a procedure is prescribed, doctors can quickly check insurance eligibility through Drchrono in real time, and perhaps adjust treatment accordingly.


In the near future, Drchrono will become even more effortless to use. Kivatinos says that one of the major upcoming developments will be geolocation and geofencing. As a doctor carries an iPad from the hospital where she works into a clinic where she serves part-time, future versions of Drchrono will be able to mark the change and start to pull up the files of scheduled patients in the new location. There won’t be any hunting through screens or tapping through options; the right files will simply show up automatically as a doctor moves. As Kivatinos put it, “we’re trying to make physician’s lives completely easy…the more the tablet does, the less the doctor has to do.”
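The geofencing half of that feature is conceptually simple: compare the tablet’s coordinates against a known radius around each facility. Here’s a minimal sketch; the facility names, coordinates, and radii are all made up, and this has no connection to Drchrono’s actual code.

```python
import math

# Minimal geofencing sketch. Facilities, coordinates, and radii are
# invented for illustration; this is not Drchrono's implementation.
FACILITIES = {
    "General Hospital": (37.7680, -122.4460, 150),  # lat, lon, radius (m)
    "Part-Time Clinic": (37.7790, -122.4190, 80),
}

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in meters."""
    r = 6_371_000  # mean Earth radius
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def current_facility(lat, lon):
    """Return the facility whose geofence contains the device, if any."""
    for name, (flat, flon, radius) in FACILITIES.items():
        if haversine_m(lat, lon, flat, flon) <= radius:
            return name
    return None

# As the tablet's reported position changes, the app can swap patient lists.
print(current_facility(37.7681, -122.4459))  # -> General Hospital
```

The distance check is the trivial part, of course; the real work is in what happens after the location flips, like pre-fetching the right schedules and records.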

The tablets are doing more every day. Engineers on the Drchrono team receive input from hundreds of doctors and continually update their platform to meet new demands and improve the product’s use. That’s not the model for a medical company, that’s the model for a tech company, and it’s how Drchrono will continue rapidly evolving to keep up with the cutting edge of medical care.

Already Kivatinos and his cofounder Michael Nusimow have the company expanding aggressively. A graduate of Y Combinator’s startup incubator, Drchrono raised $4.1 million in seed funding, and is actively looking for new personnel. Besides the constant improvements in their software, Drchrono will also look to increase its brand awareness, mostly from the ground up. The more doctors, nurse practitioners, and patients that Drchrono can attract, the closer they come to being adopted as the industry standard.

Being free helps. With tens of thousands of dollars made available by adopting Drchrono’s free EMR features, the company can easily get its foot in the door. Then, monthly subscription fees for advanced features (speech-to-text, insurance eligibility, etc.) offer value-adds while simultaneously generating great revenue. It’s a win-win situation for doctors and Drchrono.

Patients have something to gain as well. With the rise of 24-hour medical monitors, online advocacy communities, and maps of environmental hazards, patients can collect more data than ever before on factors that may affect their health. Already Drchrono is able to take some patient input, and eventually all such patient data could be folded in. Not only will you be able to learn more about yourself, Drchrono will put that knowledge directly into the hands of the people who treat you.

Occasionally Singularity Hub reviews a technology that is exciting because it is flashy, or because it represents some incredibly advanced concepts that may arise in the future. Drchrono is exciting for a different reason: it makes so much g*dd*mned sense. The adoption of EMR is a no-brainer, but Drchrono is so brilliant because it keeps adding in all the other great features needed for truly digital healthcare. As Kivatinos explained in the video, Drchrono started just as scheduling software and continually expanded to include more and more desired features until doctors found it indispensable. It’s going to keep expanding, adding in geofencing, data collection, and more. Drchrono, or a platform very much like it, could eventually incorporate or serve as the data hub for all the great mobile medical technologies we discuss (medical advice from AI, cybernetic implants, genetic tests, etc.). In other words, Drchrono is an idea that seems smart now and is only going to get smarter and smarter as healthcare evolves in the 21st Century.

[image credits: Daniel Kivatinos/Drchrono]
[video: Singularity Hub with some content provided by Drchrono]
[source: Daniel Kivatinos/Drchrono]
