High-tech renaissance man Brett Adcock on Figure's humanoid robots

Standing 5 ft 6 and weighing 132 lb, Figure's 01 humanoid robot laborer will carry up to 44 lb and work for up to five hours on a charge

Over the last 10 years, Brett Adcock has gone from founding an online talent marketplace, to selling it for nine figures, to founding what's now the third-ranked eVTOL aircraft company, to going after one of the greatest challenges in technology: general-purpose humanoid robots. That's an extraordinary CV, and a meteoric high-risk career path.

The speed with which Archer Aviation hit the electric VTOL scene was extraordinary. We first wrote about the company in 2020 when it popped its head up out of stealth, having hired a bunch of top-level talent away from companies like Joby, Wisk and Airbus's Vahana program. Six months later, it had teamed up with Fiat Chrysler, a month after that it had inked a billion-dollar provisional order with United Airlines, and four months after that it had a full-scale two-seat prototype built.

The Maker prototype was off the ground by the end of 2021, and by the end of 2022 it was celebrating a full transition from vertical takeoff and hover into efficient wing-supported cruise mode. Earlier this month, the company showed off the first fully functional, flight-ready prototype of its Midnight five-seater – and told us it's already started making the "conforming prototype" that'll go through certification with the Federal Aviation Administration (FAA) and the European Union Aviation Safety Agency (EASA) to become a commercially operational electric air taxi.

The first flight-ready Midnight prototype is complete, and ready to begin testing

Hundreds of companies have lined up to get into the eVTOL space, but according to the AAM Reality Index, only two are close to getting these air taxis into service: Joby Aviation, founded in 2009, and Volocopter, founded in 2011.

Archer's aircraft isn't an outlier on the spec sheet; it's the sheer aggression, ambition and speed of the business that has set Archer apart. And yet we were surprised again in April to learn that Adcock was launching another venture simultaneously, in a field even more difficult than next-gen electric flying taxis: general-purpose humanoid robotics.

These robots promise to be unparalleled money-printing machines when they're up and running, eventually doing more or less any manual job a human could. From ancient Egypt to early America, the world has seen time and again what's possible when you own your workers instead of hiring them. And while we don't yet know whether the promised avalanche of cheap, robotic labor will bring about a utopian world of plenty or a ravaged hellscape of inequality and human obsolescence, it's clear enough that whoever makes a successful humanoid robot will be putting themselves in a much nicer position than those who don't.

With a screen for a face, the Figure 01 looks like it'll be difficult to anthropomorphize

Figure, like Archer, appears somewhat late to the game. The world's most advanced humanoid robot, Atlas from Boston Dynamics, is about 10 years old already, and has been dazzling the world for years with parkour, dance moves and all kinds of developing abilities. And among other more recent entrants to the field is the world's best-known high-tech renaissance man, a fellow who's found success in online payments, electric vehicles, spaceships, neural interfaces and many other fields.

Elon Musk has repeated many times that he believes Tesla's humanoid robot worker will make the company far more money than its cars. Tesla is putting a lot of resources into its robot program, and it's already blooded as a large-volume manufacturer pushing extreme technology through under the heightened scrutiny of the auto sector.

But once these humanoid robots start paying their way, by doing crappy manual jobs faster, cheaper and more reliably than humans, they'll sell faster than anyone can make them. There's room for plenty of companies in this sector, and with the pace of AI progress seemingly going asymptotic in 2023, the timing couldn't be better to get investment on board for a tilt at the robot game.

Still in his 30s, Adcock has the energy and appetite to attack the challenge of humanoid robotics with the kind of vigor he brought to next-gen aviation, hoping to move just as quickly. The company has already hired 50 people and built a functional alpha prototype, soon to be revealed, with a second in the works. Figure plans to hit the market with a commercially active humanoid robot product next year, with limited-volume production as early as 2025 – an Archeriffic timeline if ever we saw one.

On the eve of announcing a US$70 million Series A capital raise, Adcock made time to catch up with us over a video call to talk about the Figure project, and the challenges ahead. What follows is an edited transcript.

Figure's offices definitely have that startup feel going on

Loz: Between Archer and Figure, you're doing some pretty interesting stuff, mate!

Brett Adcock: We're trying, man! Trying to make it happen. So far, so good. The last 12 months have been incredible.

How has Archer prepared you for what you're going into now with Figure?

Archer was a really tough one, because it was a problem that people felt couldn't be solved. You know, battery energy density is not available to make this work, nobody's done it before commercially. We're kind of in a very similar spot.

You know, we had a lot of R&D in the space. There were a lot of groups out there flying aircraft and doing research, things like that, but nobody was really taking a commercial approach to it. And I think in many ways here, it feels quite similar.

You have like these great brands out there, like Boston Dynamics and IHMC, doing great work in robotics. And I think there's a real need for a commercial group that has a really good team, really well funded, bringing a robot into commercial opportunities as fast as possible.

Archer was like: raise a lot of capital, do great engineering work, bring in the right partners, build a great team, move extremely fast – all the same disciplines that you really need in a really healthy commercial group. I think we're there with Archer, and now trying to replicate a great business here at Figure.

But yeah, it was really fun. Five years ago, everybody was like, 'Yeah, this is impossible.' And now it's the same thing. It's like, 'Humanoids? It's just too complex. Why would you do that, versus making a specialty robot?' I'm getting the same feeling. It feels like deja vu.

Yeah, the eVTOL thing feels like it's really on the verge of happening now, just a few hard, boring years away from mass adoption. But this humanoid robot business, I don't know. It just seems so much further away conceptually to me.

I think it's the opposite. The eVTOL stuff has to go through FAA and EASA approval. I wake up every day with Figure not understanding why this wasn't done two years ago. Why don't we see robots – humanoid robots – in places like Amazon? Why not? Why aren't they in the warehouses or whatever? Not next to customers, but indoors, why aren't they doing real work? What's the limiting factor? What are the things that are not ready, or can't be done, before that can happen?

I wake up every day with Figure not understanding why this wasn't done two years ago.

Right. So, part of that must come down to the ethos, I guess, of Boston Dynamics. The idea that it's research, research, research, and they don't want to get drawn into making products.

Only five years ago, Boston Dynamics said 'we're not going to do commercial work.' Ten years ago, they said, 'Atlas is an R&D project.' It's still an R&D project. So they've put up a flag from day one saying 'we're not going to be the guys to do this.'

Which is pretty remarkable, really.

It's great, they've done a lot of research. This has happened in every space. It happened with AC Propulsion and Tesla and with Kitty Hawk in the eVTOL space... These were decade-long research programs, and it's great. They're moving the industry forward. They've shown us what's possible. Ten years ago humanoids were falling down. Now, Atlas is doing front flips, and doing them really well.

They've helped pave the way for commercial groups to step in and make this work. And they're great, Boston Dynamics is probably the best engineering team in robotics in the world, they're unbelievable.

Well, I guess you've assembled a pretty crack team yourself to take a swing at this. Can you just quickly speak to the talent that you've brought on board?

Yeah, we're 50 people today, the team is separated into mechanical – which is all of our hardware, so it's actuators, batteries, kinematics, the base of the robot hardware you need. Then there's what we call HMS, Humanoid Management Systems, that's basically electrical engineering and platform software. We have a team doing software controls, we've got a team doing integration and testing, and we have a team doing AI. At a high level, those are the areas that we have in the company, and we have a whole business team.

I would say they're obviously the best team ever assembled, to be confident! You know, Michael Rose on controls spent 10 years at Boston Dynamics. Our battery lead was the battery lead for the Tesla Model S Plaid. Our motor team built the drive unit for Lucid Motors. Our perception lead was ex-Cruise perception. Our SLAM lead is ex-Amazon. Our manipulation group is ex-Google Robotics. Across the board, the team is super slick. I spent a long time building it. I think the best asset we have today is the team. It's quite an honor to wake up every day working alongside everybody. It's really great.

Figure has taken an aggressive approach to hiring, drawing in talent across the robotics industry, as well as from high-tech automotive and elsewhere

Awesome. So the Alpha prototype, you've got that built? What state's it in? What can it do?

Yeah, it's fully built. We haven't announced what it's done yet. But we will soon. In the next 30-60 days we'll give a glimpse of what that looks like. But yeah, it's fully built, it's moving. And that's gone extremely well. We're now working on our next generation, that'll be out later in the summer. Like in Q3 probably.

That's quite a pace.

Yeah, we're really moving fast. I think it's what you're going to see from us. It's like what you see from a lot of successful commercial groups, we're going to move really fast.

Yeah, Tesla comes to mind obviously. They're building all their own actuators and motors and all that sort of thing. Which way are you guys going with that stuff?

We're investing a lot in the actuation side, that's what I'll say. And I think it's important, there's not really good off-the-shelf actuators available. There's really not any good control software, there's no good middleware, there's no good actuators. Autonomy can be stitched together, but there's really no good autonomy data engine you can just go buy and bring over. Hands maybe, there's some good work in prosthetics, but they're really not at a grade where they're good enough to put on the robot and scale it.

I think we look at everything and say OK, let's say we're at 10,000 units a year volumes in manufacturing. What does that state look like? And yeah, there's no good off-the-shelf alternatives in those areas to get there. I think there's some things where you can do off-the-shelf, like using ROS 2 and that kind of thing in the early days. But I think at some point you really cross the line where you've kinda got to do it yourself.
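
For context, ROS 2 is the open-source robotics middleware Adcock is referring to: it handles message-passing between the many processes – perception, planning, actuation – that make up a robot's software stack. Figure hasn't published its own stack, so the snippet below is only a generic sketch of that off-the-shelf starting point; the topic name, message type and publishing rate are illustrative assumptions, not anything the company has disclosed.

```python
# Minimal ROS 2 (rclpy) node publishing joint commands at a fixed rate.
# Illustrative only: topic name, message type and rate are assumptions.
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import JointState


class JointCommandPublisher(Node):
    def __init__(self):
        super().__init__('joint_command_publisher')
        # Publish to a hypothetical topic a control stack would subscribe to
        self.pub = self.create_publisher(JointState, 'joint_commands', 10)
        self.timer = self.create_timer(0.02, self.tick)  # 50 Hz

    def tick(self):
        msg = JointState()
        msg.header.stamp = self.get_clock().now().to_msg()
        msg.name = ['shoulder_pitch', 'elbow_pitch']   # placeholder joints
        msg.position = [0.0, 0.5]                      # placeholder setpoints
        self.pub.publish(msg)


def main():
    rclpy.init()
    node = JointCommandPublisher()
    rclpy.spin(node)
    node.destroy_node()
    rclpy.shutdown()


if __name__ == '__main__':
    main()
```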

You want to get to market by 2024. That's... pretty close. So I guess you've got to identify the early tasks that these robots will be able to shine in. What kind of criteria will decide what's a promising first task?

Yeah, our schedules are pretty ambitious. Over the next 12 months in our lab we'll get the robot working, and then over the next 24 months we'll ideally be able to step in the first footprints of what a pilot would look like, an early commercial opportunity. That would probably be very low volumes, just to set expectations.

And we would want the robot to demonstrate that it's actually useful and doing real work. It can't be 1/50th the speed of humans, it can't mess up all the time. Performance wise, it's got to do extremely well. We would hope that would be with a few of the partners that we're gonna announce in the next 12-18 months.

We would want the robot to demonstrate that it's actually useful and doing real work. It can't be 1/50th the speed of humans, it can't mess up all the time.

We hope those would be easier applications indoors, not next to customers, and it'd be able to demonstrate that the robot can be built to be useful. At the very highest level, the world hasn't seen a useful humanoid built yet, or watched one do real work, like, go into a real commercial setting where somebody is willing to pay for it to do something. We're designing towards that. We hope we can demonstrate that as fast as we can; it could be next year, could be the year after, but we really want to get there as fast as possible.

Do you have any guesses about what those first applications might be?

Yeah, we're spending a lot of time in the warehouse right now. Supply chain. And to be really fair, we want to look at areas where there's labor shortages, where we can be helpful, and also things that are tractable for the engineering, that the robot can do. We don't want to set ourselves up for failure. We don't want to go into something super complex for the sake of it, and not be able to deliver.

We also don't want to go into a very easy task that nobody has any interest in having a useful robot for. So it's really hard. We do have things in mind here. We haven't announced those yet. Everything's a little too early for us to do that. But these would be, you know... We think moving objects around the world is really important for humanoids and for humans alike. So we think there's an area of manipulation, an area of perception, and autonomy is really important. And then there'll be an interest in speed and reliability of the system, to hopefully build a useful robot.

So yeah, we're looking at tasks within say, warehousing, that there's a lot of demand for, that are tractable for the robot to do. The robot will do the easiest stuff that it can do first, and then over time, it will get more complex. I think it's very similar to what you're seeing in self-driving cars. We're seeing highway driving start first, which is much easier than city driving. My Tesla does really well on the highway. It doesn't drive well in the city.

So we'll see humanoids in areas that are relatively constrained, I would say. Lower variability, indoors, not next to customers, things like that at first, and then as capabilities improve, you'll see humanoids basically branching out to hundreds and ultimately thousands of applications. And then at some chapter in the book, it'll go into the consumer household, but that'll come after the humanoids in the commercial workforce.

At some chapter in the book, it'll go into the consumer household, but that'll come after the humanoids in the commercial workforce.

Absolutely. It's interesting you bring up self-driving – there's a crossover there. You've hired people from Cruise, and obviously Tesla's trying to make its robot work using its Full Self-Driving computers and Autopilot software. Where does this stuff cross over, and where does it diverge between cars and robots?

I think what you've seen is that we have the ability to have algorithms and computation to perceive the world, understand where we're at in it, and understand what things are. And to do that in real time, like human speeds. Ten years ago, that wasn't really possible. Now you have cars driving very fast on the highway, building basic 3D maps in real time and then predicting where things are moving. And on the perception side, they're doing that at 50 hertz.

So we're in need of a way to autonomously control a fleet of robots, and to leverage advances in perception and planning in these early behaviors. We're thankful there's a whole industry spawning, that's doing these things extremely well. And those same type of solutions that have worked for self-driving cars will work here in humanoid robotics.

The good news is we're operating at very different speeds and very different safety cases. So it's almost looking more possible for us to use a lot of this work in robotics for humanoids moving at one or two meters per second.
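
To make that "50 hertz" figure concrete: a real-time perception loop has a hard 20-millisecond budget per cycle to ingest sensor data, update its world model and hand results to the planner. The skeleton below is a generic illustration of such a fixed-rate loop – the function bodies are placeholders, not anyone's actual pipeline.

```python
# Illustrative fixed-rate loop: 50 Hz means a 20 ms budget per cycle for
# sensing, world-model update and prediction. Generic sketch only.
import time

RATE_HZ = 50
PERIOD = 1.0 / RATE_HZ


def read_sensors():
    return {}          # placeholder: camera frames, depth, joint states...


def update_world_model(obs, model):
    return model       # placeholder: fuse obs into a 3D map, predict motion


def run(cycles=500):
    model = {}
    next_tick = time.monotonic()
    for _ in range(cycles):
        obs = read_sensors()
        model = update_world_model(obs, model)
        # Sleep off whatever is left of the 20 ms budget
        next_tick += PERIOD
        time.sleep(max(0.0, next_tick - time.monotonic()))


if __name__ == '__main__':
    run()
```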

Once they're sophisticated enough, humanoid robots threaten to crash the value of human labor down near zero. Economies and societal structures had better be ready

Fair enough. How are you going to train these things? There seem to be a few different approaches, like virtualization, and then the Sanctuary guys up in Canada are doing a telepresence kind of thing where you remotely operate the robot using its own perception to teach it how to grab things and whatnot. What sort of approach are you guys taking?

Yeah, we have a combination of reinforcement learning and imitation learning driving our manipulation roadmap. And similar to what you said with the telepresence, they're probably using some form of behavior cloning, or imitation learning, as a core to what they're doing. We're doing that work in-house right now in our lab. And then we are building an AI data engine that will be operating on the robot as it's doing real tasks.

It's similar to what they do in self-driving cars, they're driving around collecting data and then using that data to imitate and train their neural nets. Very similar here – you need a way to bootstrap your way of like going into market. We're not a big fan of physically telepresencing the robot into real operations. We think it's really tough to scale.

So we want to put robots out in warehousing, and train a whole fleet of robots how to do warehousing better, and when you're working in a warehouse, you're doing a bunch of things that you would do in other applications, you're picking things up, manipulating them, putting them down... You basically want to build a fleet of useful robots, and use the data coming off of them to build an AI data engine, to train a larger fleet of robots.

Then it becomes a hive mind-type learning system where they all train each other.

Yeah. You need the data from the market. That's why the self-driving cars are driving around collecting data all the time; they need that real-world data. So tele-operation is one way you can bootstrap it there. But it's certainly not the way you want to do it long term. You basically need to bootstrap your robots in the market somehow. And we have a combination of reinforcement learning and imitation learning that we're using here. And then you want to basically build a fleet of robots collecting sensor data and position states for the robots, things like that. And you want to use that to train your policies over time.

You basically need to bootstrap your robots in the market somehow.
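
Behavior cloning, the imitation-learning approach Adcock describes, is conceptually simple: log pairs of observations and the actions a demonstrator took, then fit a policy network to reproduce those actions. The sketch below shows the idea in PyTorch; the observation and action sizes, the network and the random "demonstration" data are invented for illustration – Figure hasn't published its training setup.

```python
# Minimal behavior-cloning sketch (PyTorch). Dimensions, network size and
# the stand-in "demonstration" data are all illustrative assumptions.
import torch
import torch.nn as nn

OBS_DIM, ACT_DIM = 64, 12   # assumed sizes, e.g. proprioception -> joint targets

policy = nn.Sequential(
    nn.Linear(OBS_DIM, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, ACT_DIM),
)
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

# Stand-in for logged demonstrations: (observation, demonstrated action) pairs
obs = torch.randn(10_000, OBS_DIM)
act = torch.randn(10_000, ACT_DIM)

for epoch in range(10):
    perm = torch.randperm(len(obs))
    for i in range(0, len(obs), 256):
        idx = perm[i:i + 256]
        pred = policy(obs[idx])
        loss = nn.functional.mse_loss(pred, act[idx])  # imitate the demonstrator
        opt.zero_grad()
        loss.backward()
        opt.step()
```

In a fleet setting, the logged (observation, action) data would come off robots working real tasks rather than a random tensor – that, in essence, is the "AI data engine" Adcock describes.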

That makes sense. It just seems to me that the first few use cases will be a mind-boggling challenge.

You've got to choose that wisely, right. You got to make sure that the first use case is the right one. It's really important to manage that well and get that right. And so we're spending a tremendous amount of time here internally, making sure that we just nail the first applications. And it's hard, right, because the robots are at the bleeding edge of possible. It's not like 'oh, they'll do anything.' It's like, 'hopefully it'll do the first thing really well.' I think it will, but you know, it's got to work. It's what I've built the company on.

So in the last six months, AI has had a massive public debut with ChatGPT and these other language models. Where does that intersect with what you guys are doing?

One thing that's really clear is that we need robots to basically be able to understand real-world context. We need to be able to talk to robots, have them understand what that means, and understand what to do. That's a big deal.

In most warehouse robots, you can basically do, like, behavior trees or state machines. You can basically say, like, if this happens, do this. But out in the real world it's like, there's billions or trillions of those types of possibilities when you're talking to humans and interacting with the environment. Go park on this curb, go pick up the apple... It's like, which apple? What curb? So how do you really understand, semantically, all the world's information? How do you really understand what you should be doing all the time for robots?

We believe here that it's probably not needed in first applications, meaning you don't need a robot to understand all the world's information to do warehouse work and manufacturing work and retail work. We think it's relatively straightforward. Meaning, you have warehouse robots already in warehouses doing stuff today. They're like Roombas on wheels moving around, and they're not AI-powered.

But we do need that in your home, and interacting with humans long term. All that semantic understanding, and high-level behaviors and basically how we get instructions on what to do? That'll come from vision plus large language models, combined with sensory data from the robot. We're gonna bridge all that semantic understanding of the world mostly through language.

There's been some great work coming out of Google Brain on this – now Google DeepMind. This whole generative AI thing that's going on, this wave? It's my belief now that we'll get robots out of industrial areas and into the home through vision and language models.

It's my belief now that we'll get robots out of industrial areas and into the home through vision and language models.
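
The "behavior trees or state machines" Adcock mentions are hand-written mappings from conditions to actions, along the lines of the toy example below – workable in a constrained warehouse cell, but hopeless to enumerate for open-ended instructions like "which apple? what curb?", which is where vision-plus-language models come in. The states, events and actions here are invented purely for illustration.

```python
# Toy state machine of the "if this happens, do this" kind used in
# conventional warehouse automation. States, events and actions are invented;
# real systems are larger but structurally similar.
from enum import Enum, auto


class State(Enum):
    IDLE = auto()
    MOVING_TO_SHELF = auto()
    PICKING = auto()
    MOVING_TO_DROPOFF = auto()


# (current state, event) -> (next state, action to execute)
TRANSITIONS = {
    (State.IDLE, 'order_received'):       (State.MOVING_TO_SHELF, 'navigate_to_shelf'),
    (State.MOVING_TO_SHELF, 'arrived'):   (State.PICKING, 'grasp_item'),
    (State.PICKING, 'grasp_succeeded'):   (State.MOVING_TO_DROPOFF, 'navigate_to_dropoff'),
    (State.PICKING, 'grasp_failed'):      (State.PICKING, 'retry_grasp'),
    (State.MOVING_TO_DROPOFF, 'arrived'): (State.IDLE, 'release_item'),
}


def step(state, event):
    """Advance the machine; unknown (state, event) pairs are simply ignored."""
    next_state, action = TRANSITIONS.get((state, event), (state, None))
    if action:
        print(f'{state.name} --{event}--> {next_state.name}: {action}')
    return next_state


if __name__ == '__main__':
    s = State.IDLE
    for e in ['order_received', 'arrived', 'grasp_failed', 'grasp_succeeded', 'arrived']:
        s = step(s, e)
```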

Multimodal stuff is already pretty impressive in terms of understanding real world context.

Look at PaLM-SayCan at Google, and also their work with PaLM-E. Those are the best examples, they're using vision plus large language models, to understand what the hell somebody's saying and work out what to do. It's just unbelievable.

It is pretty incredible what these language models have almost unexpectedly thrown out.

They've got this emergent property that's going to be extremely helpful for robotics.

Yes, absolutely. But it's not something you guys are implementing in the shorter term?

We're gonna dual-path all that work. We're trying to think about how do we build the right platform – it's probably a platform business – that can scale to almost any physical thing that a human does in the world. At the same time, getting things right in the beginning; you know, getting to the market, making sure it works.

It's really tough, right? If we go to market and it doesn't work, we're dead. If we go to market and it works, but it's just this warehouse robot and it can't scale anywhere, it just does warehouse stuff? It's gonna be super expensive. It's gonna be low volumes. This is a real juggling act here, that we have to do really well. We've got to basically build a robot with a lot of costs in it, that can be amortized over many tasks over time.

And it's just a very hard thing to pull off. We're going to try to do it here. And then over time, we're going to work on these things that we mentioned here. We'll be working on those over the next year or two, we'll be starting those processes. We won't have matured those, but we'll have demonstrated that we'll be deploying those and the robot will be testing them, things like that. So I would say we have a very strong focus on AI, we think in the limit this is basically an AI business.

Figure's team has already built a functional alpha prototype, to be revealed soon

Yeah, the hardware is super cool, but at the end of the day it's like 'whose robot does the thing?' That's the one that gets out there first. Other than Atlas, which is extraordinary and lots of fun, which other humanoids have inspired what you guys are doing?

Yeah, I really like the work coming out of Tesla. I think it's been great. Our CTO came from IHMC, the Institute for Human Machine Cognition. They've done a lot of great work. I would say those come to mind. There's obviously been a large heritage of humanoid robotics over the last 20 years that have really inspired me. I think it's about a whole class of folks working on robotics. It's hard to name a few but like there's been a lot of great work. Toyota's done great work. Honda's done great work. So there's been some really good work in the last 20 years.

Little ASIMO! Way back when I started this job, I vaguely remember they were trying to build a thought-control system for ASIMO. We've come a ways! So you've just announced a $70 million raise, congratulations. That sounds like a good start. How far will it get you?

That'll get us into 2025. So we're gonna use that for basically four things. One is continued investment into the prototype development, the robots. We're working on our second generation version now. It'll help us with manufacturing and bringing more things in-house to help with that. It'll help us build our AI data engine. And then it'll help us on commercialization and going to market. So those are kind of the four big areas that we're spending money on with the capital we're taking on this week.

We thank Brett Adcock and Figure's VP of Growth Lee Randaccio for their time and assistance on this article, and look forward to watching things progress in this wildly innovative and enormously significant field.

Source: Figure.ai

11 comments
reader
I hope they add precision movement and repeatability.
Bob Flint
Can you rent one today & have it work 24hrs for less than $360, or do we get three people working in three shifts for $15 per hour?
Daishi
Moravec's paradox - "it is comparatively easy to make computers exhibit adult level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility"

As always my opinion is that humanoid robots are the wrong approach in part because the focus on them ignores other better approaches.
1stClassOPP
It will not be good if robots replace humans. Humanity needs to be needed to feel worth living.
When people feel useless, they’ll take to drugs, or find some way to make themselves important or worthwhile by joining gangs for bad stuff.
Pupp1
At the moment, I can only see a few viable robotic solutions for the home. The main one is the lawn mower. But they remain quite expensive for most people. The 2nd was the vacuum. But that is one of the things I see quite regularly in the thrift store. They aren't very useful for a cluttered home. Though that has become better, with the vacuums being better at managing power cords and socks. But both of these are rolling "turtle" robots, as have been around for many decades because they are so easy.

The next thing for the home, which may soon be viable, is the laundry robot. But this doesn't need any new technology for actuators. Existing robot arms have been capable of the manipulations needed for several decades. Things like deciding how to wash a particular kind of clothing or fabric are easily handled with RFID tags, or bar codes. In fact, I think the task of picking up a piece of clothing, running it over an RFID reader, and sorting it would be fairly easy. I.e. pick it up, and if you detect two different RFID tags, lay it back down, and try again. If the 3rd time doesn't work, drop it into the bin for letting a human do it. But what has not been available is software that can understand how to manage cloth well enough to hang clothing up.
Daishi
@Pupp1 Every article of clothing I have is already washed and dried by machine. Potentially there could be some use from a 3rd machine that could get stuff out of the dryer and fold it but why would that machine need 2 legs at the expense of thousands in costs, higher maintenance, and worse battery life? A company (FoldiMate) made a $1000 product to do this in 2019 but in 2021 the company folded.
michael_dowling
Hope one doesn't show up at my door looking for "Sarah Connor"..
Cymon Curcumin
Humans will always have a job—printing out forms from one super intelligent computer and typing the data in them into another super intelligent computer.
spyinthesky
"From ancient Egypt to early America, the world has seen time and again what's possible when you own your workers instead of hiring them." Well, that's an interesting overview of slavery. Somewhat worried that someone who can so easily discuss it in nothing but productive and economically positive terms is someone in charge of producing its next, if artificial, version. Seems to me the dystopian science fiction version of exploitation of technology, and in particular AI, is 'safe' in his hands, which only adds to my concerns – and, it seems, those of many experts in the field – that perhaps the future of humanity is far less safe than I had hoped.
Joe Boatman
I like this article and learnt how Brett Adcock has hired the best brains to solve all these new humanoid robot development problems in an area where there is a lack of off-the-shelf parts and software. And it's interesting that the job these robots will do is not yet determined! If I were a Brett, I would look at the already established exoskeleton market to learn how they are used, why they're used, when, and where. That would establish the initial market to steer the development of how items are found, lifted and placed. A better exoskeleton on the market would then help pay for the development of replacing the human - bit by bit (or bit by bot!) until the final version is a humanoid robot.
I would also turn upside down the idea that robots will only replace labour, leaving just the managers. I say replace the guys at the top - the CEO, directors, decision makers, accountants, human resources - with AI! No need for batteries and actuators here, just a smart computer capable of making better decisions. Think: no meetings, no expensive expenses, no cars... Just better decisions. 24/7.