In his late thirties, Marek Rosa is the CEO of two companies. The first is considered merely a stepping stone to the second. Developing general AI has been Marek's dream since his youth, but he was aware that it could be a costly business. If you do not want investors steering your decisions, the best way is to fund it yourself. Thus Keen Software House, a game development studio, was a means of making money to fund GoodAI's research. Yet even general AI is, for him, only another step toward helping humankind and understanding the universe.
As far as I know, you have a screen in your office with a timer. It shows how much time is left to reach your 10-year goal. Is that a rumor, or would I really find it there?
Last summer, we set up the timer because I realized we needed a specific deadline. Otherwise, it's always “We still have ten years, we still have ten years.” It's better to feel the pain of a tight schedule. At the same time, I don't really know how long it is going to take. I'm just using a tool to put more pressure on myself.
As your LinkedIn profile reads, “one of the strongest forces in the universe is the need to create.” What drives you to create things? What's your mechanism?
It's an intrinsic motivation to create things. There doesn't need to be anything behind it. I enjoy what I'm doing, but along the way I must also reach certain goals and push things forward. Some goals, such as general AI, are far in the future for me, so I must live many years without reaching them. On the other hand, smaller goals come along that I can reach every month or every year. It's a step-by-step process.
You launched GoodAI in January 2014 as a side project, later transforming it into a full-time research and development company. What was the first idea that made you dive deep into the field of AI?
I have wanted to work on AI since I started programming, which is basically 20 years now. The reason I wasn't working on it was that I was developing games. My goal was to earn money so that I could start a company like GoodAI and work on it full time. When I was young, I realized that if one day we create an intelligent machine, it could improve itself. Repeating that improvement process could take it to levels of intelligence we can't even imagine. Then you can use it to help you solve all the other problems. And since it's a machine, it can be cloned, copied, scaled up.
What does intelligence mean to you?
For me, intelligence is the ultimate optimization for finding solutions to problems. Instead of trying to tell a machine what to do, I'd rather first solve the machine's intelligence and then let it solve the problems. This kind of intelligence can later be scaled up. I personally can work on only one thing at a time. If I were able to clone myself a million times, a million little Mareks could be working on a million different things. :) I can't do that right now, and I'm not satisfied with that.
Can you share the mission of GoodAI?
The main mission is to build general AI as fast as possible so that we can help humanity and understand the universe. We want to help people, explore and understand the universe around us, go to space, live forever. The tool we need to accomplish all of it is called general AI.
What are the basic steps you are taking to get there?
If I simplify the roadmap, I'd say we need to build an architecture that can accumulate skills gradually, then reuse them and improve itself. The second big step is a training environment that would teach this AI all the useful and safe skills. It will learn one thing after another, just like children at school. Those are the two main pillars of GoodAI.
Alongside our own R&D efforts, we decided to launch the General AI Challenge, a series of milestone challenges focused on solving general AI, with $5 million in prizes provided by GoodAI. This way we want to incentivize the wider community of researchers and programmers to focus on the right problems.
There must be quite a lot of challenges on the way.
The biggest challenge is to develop an architecture that can learn and meet a set of other requirements. It should be able to learn to recognize patterns and then react to those patterns in the way we want. At the same time, we are searching for ways to make this process more efficient. I'm sure it's doable.
What about investments? Isn’t that one of the challenges too?
Not really, because I made money before, so we should be covered for many years. I didn't want to launch a general AI company and then start searching for investors. If potential investors had a different mission for the company, it could endanger our main goal.
What is your take on people’s fear of AI taking over the world of humans?
We have an actual plan regarding AI safety. There are many principles. One of them is that we are teaching the AI safe and useful skills. We'll teach it our values, morals, and ethics.
What do you mean by “your values”?
We'll teach it that killing people or taking over the world is bad. We don't just want to develop AI that would be smarter than people. We also want to use it to find a technology that would connect the human brain and the AI. If you have a question or need to solve a problem, you can offload it to the AI. You won't think about the difference between your artificial and biological intelligence; you'd merge them together. It won't be just a wearable but an essential part of your body. If we get to this stage, it is people who will be smart, not just the AI. Thus, people will keep evolving.
What is your take on singularity?
It depends on what we consider singularity. I can think of two perspectives.
One is machines improving themselves so rapidly that every new iteration improves the next generation much faster and more efficiently than the previous one. Singularity is the moment when you as a human stop being able to track this process. It's beyond your scope; it's impossible for you to stop it.
…and the other perspective?
If we go even further, think about the kind of scientific research the AI would need to do. It's not only about working on its own intelligence. It's also about working on scientific progress, so it can use the discoveries to make its own intelligence more efficient. There are several ways – using energy intelligently, acquiring energy more efficiently, using new resources, and building new intelligent computers. Singularity is when this process of increasing efficiency and self-improvement starts to grow exponentially.
From your personal standpoint, are you afraid of the future of AI?
I am. But I also think there is no choice other than to develop general AI, because someone would do it anyway. And the best way to control the future is to create it. We are working on ways to make AI safe.
If you look at the Slovak and Czech startup environment, what do you see?
It's hyped, but in a normal, useful way. AI is also hyped these days, but that works in its favor: more people will study the field, and more money will be invested into research. It might not be the smartest way to do things, but it's a pragmatic approach.
What surprises me, though, is that only a few entrepreneurs are chasing big dreams. I'm working on GoodAI and putting my own money into it; it's my life project. People doing such things on a big scale are rare. I don't meet many people who have such “crazy” dreams, and that is very strange to me. It doesn't make sense.
What are the skills and qualities you look for when hiring people?
The first is the person's character. If they want to play as part of a team and are friendly, open, and transparent – that is a good start. The other thing is how good the person is at communication: how well they can explain their thoughts and ideas, as well as understand our thoughts and values. If a person is not good at communication, from my point of view, he or she is not smart. And we need smart people. You always need to communicate when you want to live among people. It's a very important skill.
Creativity is probably one of the skills, too.
Right. Is the person just researching things other people have already invented and recombining them? Or do they want to invent things themselves? They can, of course, take inspiration from what's out there. But it's about balance – how many of their own ideas they want to put on top of it.
I'm looking for people who can reinvent things from scratch. They understand the principles, and they are not afraid to invent. Many people in the research community are scared of trying out new, sometimes slightly crazy ideas.
Based on that, I assume you encourage your colleagues to challenge your own ideas.
That is exactly why we are launching the AI Roadmap Institute – to challenge ourselves. Its goal is to collect, compare, and study different roadmaps to general AI. If someone proposes changes, we discuss why this or that proposal is better than another. I encourage people to do that.
Is membership in the Institute invitation-only?
It's open. One thing the AI field is missing is more emphasis on the big picture. Many people in the field are trying to solve their individual problems. They share their solutions and progress, but it's very hard to know how any one solution brings you closer to general AI. If, for instance, you improve your image recognition by 1%, it doesn't matter whether it works at 90% or 95%. If you don't know how to carry on to the human level, it's better to stop and look for a different starting line. That's what the Institute is here for – to share these opinions and look for solutions in general, not only in a specific area.
Photos: Marek Rosa
Thanks to František Borsík for his cooperation on the interview.