
Superpowering the Human Body: Episode 7

Dr. Phillip Alvelda

The seventh episode of CIONIC’s podcast, Superpowering the Human Body, features a conversation with Dr. Phillip Alvelda, CEO and Chairman of Medio Labs and CEO of Brainworks. Dr. Alvelda is a technology and industry innovator as well as an educator. He was a Program Manager at DARPA’s Biological Technologies Office, where he developed national-scale R&D programs and technologies at the intersection of engineering and biology. He is also the founding Chairman of WiseTeachers, a non-profit helping K-12 schools extend STEM education with an emphasis on creativity and innovation.

Dr. Alvelda holds over 30 patents and patents pending on a wide range of technologies, a technical Emmy Award, a Bachelor’s degree in Physics from Cornell University, and Master’s and PhD degrees in Computer Science and Electrical Engineering from MIT.

Watch on YouTube

Listen on Soundcloud

The full transcript is below, edited for readability.

Jeremiah: Hi, my name is Jeremiah Robison, Founder and CEO of CIONIC. We build bionic clothing that can analyze and augment human movement, enabling the body to move with more independence and control. Welcome to another episode of our podcast, Superpowering the Human Body, where we explore the science and technology of human augmentation.

I’m so excited today to be joined by Dr. Phillip Alvelda, a technology and industry innovator as well as an educator. Dr. Alvelda was a Program Manager at DARPA’s Biological Technologies Office, where he developed national-scale R&D programs at the intersection of engineering and biology. He is also the founding Chairman of WiseTeachers, a non-profit helping K-12 schools extend STEM education with an emphasis on creativity and innovation, and he co-founded MobiTV, one of the first live television experiences on the mobile phone.

He has so many highlights of his career, and I’m so excited to start talking with him. So, we’ll jump right in. Phillip, welcome.

Phillip: Thanks so much for having me. I really appreciate it.

Jeremiah: So as I said, your career has been fascinating, with contributions across science, technology, and entertainment, and you kind of started some of the really early work in neurotechnology and neural interfaces at DARPA. Can you tell us a little bit about the work you did there?

Phillip: Sure. I was recruited to come to the agency. Many people know DARPA for investing in the creation of semiconductors, microcomputers, the internet, and GPS technologies. But I think what many people weren’t aware of is that, not too long ago, they decided that biology was becoming a technology in its own right — and we could begin to think about designing and engineering with it — and they had started a Biological Technologies Office. And for me, this was the planets and stars aligning, because my background started in AI and sensing, back in the 80s. I worked on the original Star Wars Strategic Defense Initiative — the plan to intercept ballistic missiles — when, I think, the computers we were launching into space on the satellites at that time had less processing power than your Fitbit does today! And yet we were able to get them to do some remarkable things.

And that’s when I became interested in AI in general. Ever since then, I’ve done little pieces of a career in physics and semiconductors and displays, but always with an eye towards computing and artificial intelligence, and “what is the brain really doing in there?” And so they recruited me to DARPA because I was one of these peculiar fellows that had skills across all the disciplines that they needed to build a real neural interface system. At that time, they had done some really remarkable early experiments where they had these tiny little arrays of wires — imagine one hundred little wires that are a millimeter long — and they would just press them into the surface of the motor cortex, and they could use that on a paraplegic person to control a robot arm. Okay, that was amazing! That was a watershed moment where, for the first time, you could imagine: all right, this is just technology. It’s just wet and sticky instead of hard and brittle, but it operates in a completely different way that we don’t fully understand yet. But we’re starting to peel out some of the engineering principles, and we can make an artificial system that really does marry to the brain.

And so they recruited me because they had done the technology demonstrations, but they’d gotten stuck in some ways. You hear a lot about the desert of despair between an initial technology demonstration and the commercialization and growth of an industry. So, with my entrepreneurial experience, and with my experience in the vastly different fields of engineering that all kind of touched it, they had hopes that I could come in and not just spawn a new technology development program, but bring it closer to industrial readiness. In a way, catalyze the brain-machine interface industry by integrating the latest photonics with the latest electronics, with the latest gene engineering, with the latest neuroscience. All of these are things that, in different parts of my career, I had studied. So I was a little bit of a unicorn for them, where they said, “You’re the guy,” but those just happened to be the things that I was interested in. And that turned out to be a really amazing journey, because you go to DARPA and, for someone like me, it’s like a candy store. They gave me a portfolio of 175 million dollars of research funds to manage. And then they said: just travel the world, find the brightest people in the most amazing laboratories. It can be companies, high-tech university labs, independent folks, whoever you think can make this happen. Figure out how to catalyze this new industry.

And, so that’s what I did. I spent the next two years really just looking around for what were the latest pieces in all the different technology areas. And of course, you spend two years talking to the smartest people on the planet and it’s just a delight.

Jeremiah: And some of it’s bound to rub off, right?

Phillip: (laughs) Yeah, you hope! For me, it was great. I’m very curious and I don’t sleep much, so it was a great job in that respect. But it was really fascinating, because one of the things we discovered was that a lot of the different component technologies were really quite advanced, but none of the different disciplines had really communicated with each other. And so they weren’t aware of how advanced the other guys’ pieces were. So it turns out a big role we ended up playing was kind of ecosystem builder, where, having found all these people internationally, we would invite them to a series of workshops where the best photonics people that had the fab and were building the parts could sit next to the LED people, who could sit next to the CMOS people, who could sit next to the computational neuroscientists, who could sit next to the medical device manufacturers who could do the regulatory piece. And so when we got them all together, I was literally able to stand in front of the room and say, “I’m going to tell you that you can build a brain-machine interface that will read from a million neurons and write to 100,000 in a reasonable DARPA program. And right now, I don’t think any of you are going to believe me. But I can tell you that, while you think you’ve got your piece in hand and the other stuff isn’t ready yet, somebody else is thinking the same thing in your direction.” And so it really was just introducing them and letting them form teams, and just making them aware of how far the state of the art had come, and that if they would but team up and cooperate, they could do amazing things. And really great things did ensue ultimately.

Jeremiah: That’s fantastic. We see the same thing, where we have the wonderful opportunity to work with all these amazing academic labs that are building the future of neural care, and we are able to provide a mechanism whereby they can collaborate, they can commercialize, they can productize and bring this stuff out of the lab — translational science — into the communities that need it. It’s really interesting to think about what you said about the multidisciplinary approach and all of the things that have to come together when you’re talking about biology to the metal, metal to the cloud, and back again. What’s one of the biggest insights you have for anybody who may be coming into neurotechnology, physiology technology, or human augmentation now, that’s different from just working with machines alone?

Phillip: Well, I think there are a couple of what I would say are helpful insights, and one big cautionary one. The helpful insights are that this technology is real, it’s doable, and it has reached a point where it really can help thousands, if not tens of thousands or hundreds of thousands, of people that have various ranges of deficits. Deafness, blindness, and aphasia are just a few of the conditions that are being treated by the devices that we sponsored. And now there are many others that have kind of grown up out of that ecosystem that we first put together.

Everything from Elon Musk’s Neuralink to Bryan Johnson’s Kernel to the Facebook mental typewriter, and a lot of the deep brain stimulators at GSK, Boston Scientific, and Medtronic — all of those grew out of the ecosystem that we built. I’m not saying we sponsored and gave money to all of them, but we advanced each of the large ten-person teams enough that all these companies started poaching scientists like crazy, realizing they could really do this stuff. And so it really did take off, and that was really neat to see. But the advances are real now, and the people we can help are really beginning to benefit.

There’s one program I was working on that wasn’t even my program. I was kind of the assistant guy who thought like a physicist and computer scientist amidst all the biologists, who was called in to support a program that was designed to endow prosthetic arms with touch sensation. Well, when I got there, they had already gotten the control part: with your mind, you can move it around. But a large part of your dexterity is that you can feel the cup when you are trying to pick it up. And so the idea was: what about that touch sensation, which really is quite powerful when you think about it: pressure, temperature, texture, vibrations, all those things you sense without even really thinking about it. What are those codes, and how are they related to the cortex? And I tell you, we made remarkable progress. The guy running the program, a guy named Doug Weber, really did an amazing job. And I tell you, in the moments when we were done with the human trials, the soldiers that had lost a limb were onstage with the arm, not just showing how it works and that it is real, but talking about how, for the first time, they felt like they had their arm back. Because it was endowed with touch sensation, it wasn’t just something they were controlling; it had become part of them. And I’ve just got tears running down my cheeks, saying, oh my God, look at what we’ve done for these people. It was one of the privileges of my life to just happen to have been there at the moment when we could help make that happen.

Jeremiah: That’s awesome. And the cautionary tale?

Phillip: And the cautionary tale is that I think a lot of people that get into the field don’t realize what the implications are of the FDA requirements to ‘do no harm’.

And what that means is that, for now, a lot of the neurotechnologies are what I would call a long game…in that when you say, I’m going to have a cranial implant to cure blindness, well, we built the first prototypes, but do you know how long the approval cycle is for a cranial implant? It’s about 10 years. And so you can do it, but you have to think on a 10-year timeline, not a two- or three-year product development timeline.

And I think that you see a little bit of this from Kernel and from Neuralink as examples: very ambitious, very tech-forward, and then they just ran smack into the “Oh, you want to put this into humans? Well, here are the FDA requirements.” I wouldn’t say it’s a roadblock, but a set of procedures that are designed to protect people as you build these things. So I’d say, (A) yes, doable, but (B) beware of your obligations to protect people in the process, and what that means for how you need to structure and time a company.

Jeremiah: I find it interesting when you talk about that responsibility, and I think it goes both ways. Which is to say, you have these interactions with these amazing people that you’re helping, and you see a concept go from just an idea to actually working in someone’s life, and there’s the responsibility to do it with no harm and the time that takes. But there’s also that deep sense of obligation — now that I know it can be done, how do we get it to these people, and how do we get it there as fast as possible — given what you said in terms of regulatory timelines?

Phillip: Yeah, and for us, this was a little bit of directional guidance for what company we wanted to start. In fact, coming out of DARPA, I had imagined that I would help catalyze the brain-machine interface industry, and then I’d go start a company in that space and start building all these neat, nifty things we can connect to the brain. But, you know, it turns out that the types of companies I like to build are typically these bootstrapped or venture-funded companies, and they don’t have ten-year time horizons. They don’t have time to wait for FDA regulatory approval. By and large, they’re shorter-term bets. But it turns out that for Brainworks, the company we ended up starting, the Brainworks Foundry, once we were putting these instruments into the skulls of, you know, first mice, and then sheep, and then monkeys, and then people, this was the first time we had instruments in someone’s head that could look at a million neurons at once and see what they all were doing, and really kind of parcel out what computation was actually happening in the brain. Not the general operating theories; you can open up a neuroscience textbook and it will give you those general descriptions. But, what was it Richard Feynman said? “You don’t really understand something until you can build it.” It’s like when you say: all right, suppose I have an MPEG video file and I want to stimulate the neurons in your cortex so that you perceive the MPEG video file. Well, that’s a completely different and additional layer of understanding that you need for the engineering transduction of MPEG bits to neural action potentials. And so, once we started developing some of those, we learned all sorts of things about how the brain worked that we didn’t realize before. And so I left DARPA with this keen awareness that there are different pieces of the brain that can do different things that have no current embodiment in the world of AI today.
And so that’s what we started Brainworks to do: to take the kinds of discoveries that were unveiled through the DARPA investments and immediately apply them to build more powerful systems that go far beyond the typical backwards error propagation or deep learning architectures of today’s AI toolkits. And that’s how Brainworks began.

Jeremiah: That’s awesome. I want to touch on that a little bit more. We talk about existing paradigms for deep learning, and a lot of them involve taking massive amounts of data, bringing them back to the cloud, and processing them there, where you have big, powerful computers. But in order to actually augment a human, you have to be doing that in fractions of a second…in real time…on the body…with no connection. You’ve spent a big part of your career translating things down to the edge. What’s the pathway for AI at the edge, and how do we get there?

Phillip: Well, I think the good news is that the human body is a remarkable thing, in that it does the training and the operation all in one big, wet, messy, tiny package inside your skull, and it’s pretty damn energy-efficient to boot. And it’s got a lot of operating principles that are very different from the way we drive computers, which are kind of crude by comparison: slamming these transistors all the way on and all the way off and trying to do that faster, as opposed to letting the physics of information propagation drive a computation. That, incidentally, was the subject of my thesis for my PhD program: how can you use the intrinsic physics of systems to do computation without spending energy? And I’ll say that we are still at a pretty crude approximation of what the brain does. Every generation of brain simulation is a little bit closer and captures a little bit more of the power, but it is still trapped in the assumptions you make to simplify the model so that you can run it on today’s computers. And so, in a way, we’re kind of victims of our own historical computer developments. We’re trapped within them. But I think that today we benefit from the fact that while training is difficult, execution is relatively easy from a computational standpoint. And so I think that many of these earlier successes have come from training systems with your giant cloud systems. And then, when you want to translate that to a portable device, you package it in a little bit of firmware on chips like Apple’s M1 or its successive Bionic chips, as they call them, or Google’s equivalents, that run the machine learning algorithms. But they don’t do the training there. They have done the training in the design, and then they embed the efficient execution piece in custom hardware on the device.
So it’s kind of like taking what your brain does that we can’t quite do in a portable device yet, pulling it out and replicating it in this big, messy cloud, and then doing the other pieces in miniature to perform a local task. But it means that you can’t do some things that the human brain can do. It means that the Bionic chip in my iPhone is never going to learn anything. We’ve got to learn things elsewhere, or wait for the next generation of chips.

Jeremiah: Let’s get back to the business of neurotech. You mentioned earlier that there are long time horizons, and those demand deep pockets. So you either need Elon Musk’s personal wealth stash, or you end up like CTRL-labs, selling to Meta to fund this research. But you’re doing a start-up, and in fact a series of startups, in the neurotechnology space. Tell me your model. Tell us how that works.

Phillip: I have a passion for starting small technology companies, finding some piece of technology leverage, and then kind of growing the bubble of what’s possible. My favorite thing to do, of course, is to find an awesome group of people and imagine how you can change the world for the better. Now, having done a few of these, I don’t need to do this to pay the mortgage anymore. But we wanted to do one that really moved the needle and changed the human condition in a beneficial way. And so, when we started Brainworks, we realized we had this kind of new neurotech…AI tech…kind of toolkit that you could apply to enhance pretty much any operation. And we did a pretty intense survey; we spent about a year really looking at all the different technology and business areas, and we built financial models, business use cases, and even some small demonstrations of the type of things that we could do in each of these areas. And I’ll tell you, every time we turned around, we came back to healthtech.

Because the numbers are so big and the demand is so pent up. And there are so many outdated practices that could benefit from the slightest bit of automation, not to mention if you could really do something that would use these latest technologies. And so we focused Brainworks, and its first target actually, on the delivery of heart healthcare.

And it really was an interesting journey, because we realized that if you look at the biggest killers, the most people die unnecessarily from heart conditions. Usually untreated and even unrecognized heart conditions that they didn’t even know they had. And it’s also the thing that costs the most money in health care.

And we ended up teaming up with Allen Taylor at MedStar. He’s also a Professor at Georgetown, who sits on the board of certification for cardiologists, writes the textbooks, and so on. So he’s the guy who, more than anyone we could find, kind of defines how you care for people with heart conditions. Right? And he loved what we wanted to do. We said: we’ve got these new AI tools, and we think we can take the data that you have, take your expertise at a senior level, and make it accessible to many others; surface insights; and help with diagnostics and treatment planning and all these things. And he said, that’s great. And so he shows up with just a shit ton of data…WHOMP…and we were in hog heaven for a little while, because, you know, data is really the fuel that AI companies run on in many ways. And then we realized something. We realized all of that data really only applied after someone realized they had a crisis. The data started after they’d had the heart attack.

We realized at that moment that the real challenge wasn’t surfacing the insights, because they had already had the heart attack, right? There’s the question of how you treat the heart attack and optimize, and that’s great. But the real issue was: how could we make checking your heart health so low-cost and so low-friction that you would just do it all the time? Then we could predict when you might have the heart attack, rather than treat you after you’ve had it. And so the first product that we built was in cooperation with the MedStar group and with a group at Johns Hopkins.

We built a system that would replace the push carts in the hospitals, where they wake you up every two hours, with a camera that just does it all the time. It records you automatically and identifies your face; if you’re a consenting patient, it starts logging your vitals to your health record, and if you’re not, it doesn’t store anything, so we don’t violate any HIPAA compliance or, you know, consent regulations. And that was going like gangbusters. And then the pandemic hit right in the middle, and all the hospitals kind of went onto a war footing, where they just were not in a position to take in any technology. And having been at MobiTV for the financial collapse, and even really 9/11, we realized this was going to be very similar, and that it was going to be a year or two before they could really take on the new technology. So we had to do a hard pivot, because no one cares about vital signs when you need a covid test; what you really wanted was a covid test. But the covid tests still in use today are just egregiously expensive.

And that, ultimately, is where we headed. We began to apply these same AI techniques combined with the new kinds of molecular sensing techniques that are coming out of the next-generation sequencers. And so we’ve got our lab up and running here in Alameda, and we’re now taking bids for large-volume contracts to support a whole broad range of schools and companies. Knock on wood, in a couple of weeks we’ll also be launching an at-home test kit. So you can just go to the website, say send me a kit, and we’ll ship you something in a couple of days. You just spit into the vial, and we’ll give you an answer within 24 hours.

Jeremiah: Wow, that’s super fantastic. And all of that is coming out of your Foundry.

Phillip: Yes. So we now have a new company, Medio Labs, that’s the first official spin-out of Brainworks, and off we go.

Jeremiah: Now how many other companies are cooking in Brainworks right now?

Phillip: Oh, well, you know, we still have our kind of prioritized list. I think the model that really works for us is finding the right launch partner to go in on a business in a particular area. So we’re looking at things like energy management for distributed energy sources and loads, and we’re looking at education and automating reading assessment. You may know that in this pandemic, huge numbers of inner-city public schools in the US just shut down for, like, 18 months. And the kids in kindergarten through third grade are somewhere between one and three years behind on reading. So there’s a huge problem of how do we catch them up, and we’re looking at some solutions for how to do that.

Jeremiah: Amazing. Alright, well, with your extensive background across science and technology and productization, I thought we’d have a little fun and do a little segment I like to call “Science / Fiction”. I’ll ask you four questions, and you tell me science or fiction. And feel free to elaborate as much as you like. I’ll start with a layup, because we already talked about it: Science or fiction? 24/7 diagnostics that track your body’s every vital sign will usher in a new era of continuous personalized medicine.

Phillip: That is now operating fact. We have it.

Jeremiah: FACT. I love it.

Phillip: I would say it’s operating technology today.

Jeremiah: Okay. Well, I am excited for my AI doctor to join me on this journey.

Phillip: (laughing) Technically speaking, we don’t have the authorization from the FDA to do diagnostics yet, but we can tell you what the vital signs are.

Jeremiah: I spent a long part of my career in non-invasive diagnostics, and I am a big believer in that future myself. So I’m also going with science. Science or fiction? You have already signed up to have Elon Musk’s Neuralink installed in your brain as soon as it becomes available.

Phillip: I will probably not be amongst the first to sign up (laughs), but I absolutely want my brain interface. And I think it’s probably a matter of somewhere between six and ten years before you can get one.

Jeremiah: It’s amazing to think about the time horizon there, and there are so many technologies that I do think have hit that desert of despair, as you said, because they don’t bring enough value for the invasiveness in your life. And when you talk about invasive, well, lacing things through your brain is pretty extreme. But when we talk about the populations that they’re starting with, similar to the populations we’re starting with at Cionic: people with a real unmet need who can gain function, mobility, and independence through this technology. They’re the proving ground. They’re the right people to start with.

Phillip: Yeah, and I will say that, as you do with these kinds of inventions, especially the most invasive ones, you look for the people with the most desperate need, or people for whom, if you damage them inadvertently, you’re not taking away something that they already have. So we get just tremendous outreach and continual requests: “Can you help me? I’m blind, I’ll do anything.” And it’s particularly poignant for the people that had sight at one point and lost it through an accident or something, and they want to recover that. But I think, of course, that’s where you start with these technologies. And then, you know, people like you and me that just want to be able to read without having to pick up a book, we’re farther down the list, right?

Jeremiah: Alright, science or fiction? Ten years from now, more of our social interactions will happen in the metaverse than in the real verse.

Phillip: I think that became fact this last year with coronavirus, if you’re willing to grant Zoom metaverse status.

Jeremiah: It did kickstart us along that path, for sure. I’m anxious to get back to more of my interactions in person. But I think there’s a really interesting pathway there, and especially when we think about how much Facebook did to connect people that didn’t see each other on a regular basis, it’s not that hard to imagine more and more social interactions happening there.

Phillip: Yeah, you know, I think the metaverse is kind of an interesting conundrum for me, because I’ve been looking at virtual reality since the early days of MicroDisplay, when we were making the virtual reality displays that ended up in Google Glass and elsewhere. And we used to joke that it was a zero-billion-dollar industry (laughing): we were flogging these glasses and we could always find these vertical applications, but of course, what you really wanted was Snow Crash and the metaverse and everyone buying one of these things. You can see little glimmers of it today. I don’t know if you’ve tried it, but if you haven’t, you should try Snap Camera for your Zoom: these PC plugins where you can use all the Snapchat filters, and some of them are really quite sophisticated. They’ll put helmets on your head; they’ll transform you into Darth Maul. So you have the beginnings of at least sitting at a table in the metaverse, with an avatar embodying your face, if not standing around and fighting each other with virtual science fiction or fantasy weapons. But if you want that, go play Call of Duty. We’re seeing it in bits and pieces, and I think we’re very close, certainly in games, though not quite for the professional interactions yet.

Jeremiah: Okay, question four: science or fiction? The AI apocalypse is already upon us.

Phillip: (ha) Fiction. I think one of the intros to one of my talks addressed that front and center. If you look at the people who actually work on AI, there’s a close correlation: the closer you work to the actual algorithms and what they can and can’t do, the less you believe we’re at risk for an apocalypse, because you know how bad they are. I want to qualify that a little bit; we’ve gotten to the point that, for very narrow, very well-defined problems, these artificial systems can now exceed human performance. But they’re bad in that they’re brittle. Like, if you change something, or you make some variation, or you change an accent — God help a poor Scottish person trying to use a voice recognition system; there are great YouTube videos of people trying to get out of an elevator — it’s those sorts of things where you realize we’re not at a huge risk. And I think that people who think that there’s going to be some moment in time when all of a sudden the apocalypse happens haven’t really been paying attention to how these technologies come about: they come about piecemeal. So in the neuroscience world that you’re familiar with, we invent a little bit of technology and we can replace a little piece of the motor cortex to control an arm. We do another little piece of technology and we can put in another thing to replace your eyes so that you can see. But the way your brain represents abstract thoughts and emotions engages the entire surface of your brain. We’re nowhere near that, and even if we could replace it bit by bit, it’s not going to sneak up on us. It’s going to happen piecemeal.

Jeremiah: Well, I for one welcome our robot overlords when they come.

Phillip: We’ll no doubt be more productive anyway.

Jeremiah: Well, thank you, Phillip, for joining me today. This was a stimulating and wide-ranging conversation, and I very much enjoyed it. And thank you all for listening to Superpowering the Human Body. You can subscribe to our podcast on Soundcloud and YouTube so you never miss an episode. Until next time, thank you.

Phillip: Thanks, Jeremiah.

