Originally he wanted to program video games, but he eventually moved into 3D scanning. The idea of robust, accurate, yet very fast 3D scanning led him, together with Jan Zizka and Michal Maly, to found the startup Photoneo, which recently received a €2.1 million investment from Credo Ventures. We spoke with co-founder and CTO Tomas Kovacovsky.
What is the latest news from Photoneo? What are you currently working on?
We have finished our product – a 3D scanner composed of a camera, a projector unit, and a Tegra processor that runs all the calculations, so the user's computer is not slowed down. The scanner is mostly used in industry; a typical application is so-called "bin picking", where a robot picks random objects out of a box. That might be an easy task for a human, but for a computer, and especially for a robot, it's quite difficult. On most production lines, components sit in an exact, millimeter-precise position, so the robot can only pick them up from that exact spot. This is slowly changing now with so-called Industry 4.0. Our scanner captures the object's 3D surface, records what the object's model looks like, and with a matching algorithm we find its position even when we cannot see the whole object.
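The matching step he describes – aligning a stored model to a partial scan to recover the object's position – can be illustrated with a toy iterative-closest-point (ICP) loop. This is only a minimal sketch of the general technique, not Photoneo's actual algorithm; the function names and point clouds are invented for the example.

```python
# Toy model-to-scan alignment for bin picking: repeat nearest-neighbour
# matching and a Kabsch (SVD) best-fit step until the scan lines up
# with the reference model. Illustrative only; brute-force neighbours.
import numpy as np

def kabsch(src, dst):
    """Best-fit rotation R and translation t mapping src onto dst."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cd - R @ cs

def icp(scan, model, iters=20):
    """Estimate the rigid pose that maps a (partial) scan onto the model."""
    R, t = np.eye(3), np.zeros(3)
    for _ in range(iters):
        moved = scan @ R.T + t                  # current guess applied to scan
        # nearest model point for every scan point (brute force)
        dists = np.linalg.norm(moved[:, None, :] - model[None, :, :], axis=2)
        nn = model[dists.argmin(axis=1)]
        dR, dt = kabsch(moved, nn)
        R, t = dR @ R, dR @ t + dt
    return R, t
```

Note that the scan only needs to cover part of the model: each scanned point is matched to its nearest model point, which is what lets the pose be recovered even when the whole object is not visible.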
What's the idea behind the 3D scanner?
It's a device that reconstructs a 3D surface. Our unique know-how lies in building a camera that can scan in high quality in real time. We founded the company on this idea, and our technology is currently patent-pending.
What is the most difficult part of this technology from an investment point of view?
To have a technology that works perfectly, we need our own CMOS sensor, similar to those in cellphones, and that's a very expensive technology. Its development is estimated at roughly €1 million. We are now in a preparation phase for our main product; once the new CMOS sensor is ready, which should be at the end of the year, we'll simply replace the old sensor with the new one and update the algorithms.
How would you describe your customers? Who usually approaches you?
Our main customers are integrators – people who integrate a specific technology to solve a particular problem. They deal with a wide range of problems, so let me use a specific one I know of: a customer needs to inspect frozen chickens. The chickens go through a production process where, at one point, they are plucked, and sometimes a chicken loses a wing or a leg along the way. It sounds brutal, but it's a real problem. If such a chicken reaches an end customer, they usually claim a replacement, so incomplete chickens need to be detected beforehand. With 3D reconstruction you can tell whether the chicken is complete, whether any part is missing.
What we mainly provide is the core technology, and our customers are the people who implement it.
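The completeness check in the chicken example boils down to comparing a scan against a reference model and flagging the regions of the model that the scan never touched. A minimal sketch, with part names and the tolerance invented for the illustration:

```python
# Toy 3D completeness inspection: for each named part of a reference
# model, measure how far its surface points are from the nearest
# scanned point; a part the scan never came close to is reported missing.
import numpy as np

def missing_parts(scan, model_parts, tol=0.1):
    """Return the names of reference parts not covered by the scan.

    model_parts maps a part name (e.g. 'left_wing') to an (N, 3)
    array of reference surface points for that part.
    """
    missing = []
    for name, pts in model_parts.items():
        # distance from each reference point to its nearest scan point
        d = np.linalg.norm(pts[:, None, :] - scan[None, :, :], axis=2).min(axis=1)
        if np.median(d) > tol:          # most of the part was never seen
            missing.append(name)
    return missing
```

Using the median distance rather than the maximum makes the check tolerant of a few noisy or occluded points while still catching a whole absent part.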
What are your next plans with the product?
What we offer customers is the device itself, but we also try to build a library that can potentially do even more. For example, I describe this chocolate cookie and save it in the library, so I can search for cookies somewhere else later. With such a strategy we can get all the way to the final application: the integrators don't need special experience with the software, they only do the part that interests them. This approach gives us a chance to be even more valuable to our customers. We try to identify uses for our device that may be very general but where our know-how can still be applied.
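The "describe once, search later" library idea can be sketched as a store of shape descriptors that new scans are matched against. The descriptor below (a histogram of pairwise point distances, which is invariant to rotation and translation) is a simple stand-in chosen for the example, not Photoneo's method, and all names are invented.

```python
# Toy object library: register a shape descriptor per known object,
# then identify a new scan by nearest descriptor. Illustrative only.
import numpy as np

def describe(points, bins=8):
    """Rotation/translation-invariant histogram of pairwise distances."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    d = d[np.triu_indices(len(points), k=1)]
    hist, _ = np.histogram(d, bins=bins, range=(0.0, d.max()))
    return hist / hist.sum()

class ObjectLibrary:
    def __init__(self):
        self.entries = {}

    def add(self, name, points):
        self.entries[name] = describe(points)

    def find(self, points):
        """Name of the stored object whose descriptor is closest."""
        q = describe(points)
        return min(self.entries,
                   key=lambda n: np.linalg.norm(self.entries[n] - q))
```

Because the descriptor ignores pose, a cookie described once in one place is still recognized later at another station, in any orientation – which is exactly the workflow described above.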
Another thing we are currently working on is the contextual aspect. For example, if you have a toothbrush in your bathroom, it's clearly a toothbrush used for brushing your teeth, but if you store it in a closet, it's probably used for cleaning. This is something you cannot guess from the object itself, but you can predict its use from the context. And that's something a human can learn.
We are still thinking about how to develop the technology further. My dream is that someday this technology will replace a robot's eyes. However, we can't do everything at once. Industry is currently our number one market, and we will identify its further potential in the future.
You mentioned the team will grow soon. How's the hiring going? Who are you looking for?
Currently there are twelve of us, including three founders – me as CTO, and Jan Zizka as CEO, who deals with hardware design and has a PhD in computer vision. The two of us cover the software, computer vision, and computational side. Then there is Michal Maly, our COO, who has a PhD in AI and handles the intelligence and contextual part of the product.
Regarding hiring, we are currently looking for a senior machine learning specialist with experience in neural networks. Then we need a C++ programmer for GUI and frontend work, plus some classic C++ coders, because there is a lot of work. Everything from high-level work to simpler tasks needs to be covered. We are also looking for someone to support the sales team and the business. But even beyond that, if you have super skills, we will find you something to do. Super-skilled people are always welcome. :)
The people we already have challenge us every day, so we keep pushing past our limits. The team has a really high standard and we want to keep it that way. Even though we want every superstar, I have to say the tasks are not always interesting. All of us need to do implementation work, because the product must run 100% smoothly, and that means routine coding, not just solving super interesting problems.
What's your background? Tell us a bit more about your path to Photoneo.
I started coding in elementary school. I am convinced that coding should be mandatory as early as kindergarten…
…you are not the first one to mention that…
Many people say foreign languages are important. And I agree, foreign languages are super important, and a programming language is one of them. People should understand it is a real language as well. Just as I can communicate with people around the world in English, I can also communicate through a programming language.
But back to your story.
I wanted to program games, so I went to a high school where I got into an informatics class, learned to program better, and participated in many interesting projects. Matfyz (the Faculty of Mathematics, Physics and Informatics, Comenius University – author's note) was therefore a natural step for me. I tried some projects with virtual reality; I have always been very active and tried to do a lot of things. That may be my recommendation to others as well: always set your standards so high that even if you don't quite make it in the end, the result is still ten times better than what would be expected of someone else.
Around that time I met Jan Zizka and he became my supervisor. I wrote my bachelor's thesis on a 3D scanner with a projector and a camera, so we progressed quite quickly. We also won a student science conference, and our cooperation deepened. After school we worked together in a company where we saw how the industrial world works and gained the necessary experience. The main difference from the academic view is that everything needs to work as close to 100% as possible, because on one million in revenue, every percentage point down means a loss of 10,000. And that's too much. This is how we work today as well.
And your path to Photoneo?
During one of our long conversations we came up with an idea for making the scanning process, which was already robust and accurate, faster as well. That was the moment we both got goosebumps and thought, "this might work!" We kept thinking about it; the whole process was huge, a unique technology no one had built before. One after the other we left the company and launched Photoneo. That's when Michal Maly joined us, together with Branislav Pulis for business development. And from our early beginnings we have been cooperating with Ramesh Raskar from MIT as one of our advisors.
I suppose you needed an investor.
At that time we didn't have any funding; we used our own resources and only then started looking for an investor. The first people we met were a bit outside our technological scope. They would have acted purely as investors, without any deeper involvement in our field. Then we contacted Credo Ventures. They were quite interested, but said they focused primarily on software and therefore didn't have much experience in our field, though they would like a closer look. Andrej Kiska Jr. visited us and he actually liked it a lot.
Did the investors understand what your idea was about?
It took a while, but they understood the concept. We talked with six investors and five of them said they would support us, so all of them saw we had potential. The discussions were then more about the conditions and the form the deal would take. Finally, we made the deal with Credo. It took some time to finish the legal work, due diligence, and so on. We were a bit unusual in that we received an investment of €2.1 million, which is quite rare in this region for a seed-stage project. It's complicated to approach someone with no shared history and ask for 2 million for your project. The main part of the money covers the outsourced sensor development, which was the greatest limitation we couldn't overcome ourselves as founders.
How do you see the future of 3D scanning? What are the greatest challenges?
It's very interesting. The function of our eyes has already been copied very well by the 2D camera; I don't see much room for another big improvement there, we're close to having a perfect camera. Evolution didn't "create" 3D; we only have two eyes and somehow manage to recover 3D from stereo, but that doesn't mean we don't want to go further. We don't expect automatic machines such as cars or robots to achieve merely what humans do. We expect more. If you asked anyone whether they wanted self-driving cars everywhere with 10% fewer accidents than human drivers, they would probably say no. People expect such cars to have no accidents at all. That's why they would need a very specific sensor, on a different level, to achieve this.
We are sure that 3D cameras will be everywhere. It's the same as with photography: you want to create a good-quality picture that can be printed as a big poster so everyone can look at it. The same applies to a 3D scanner. I want to scan a figure and afterwards look at it or use it in a game. I scan it once and use it as content. I believe everything will be digitized in the end, and it won't take that long.
Recently, a YouTube video of you scanning Buzz Lightyear caught my attention. Do you think your technology can be used, for example, in the movie industry?
Definitely. We have also had several requests from startups that wanted to revolutionize cinematography. However, we've heard from various people that this might take a while, because the market is very artistic and a lot of innovation can be slowed down by an old-school approach. But in general, it really makes sense to use it in the movie industry.
The problem, however, is how to make a 3D camera able to scan everything it's pointed at. That part is very tricky. For example, if you have a glass ball, you cannot optically scan it. But what you can do is record face or body motion. The world doesn't have to be scanned all the time; it can be scanned just once, but properly. You have an actor, you scan him or her, create a model, and then you can finish the tuning.
How do you see your technology with regard to virtual reality?
If you put an Oculus on your head, it's an experience on a totally different level from a visit to a 3D cinema. There was a boom in 3D movies, but we see in 3D all the time, from the moment we open our eyes in the morning. But if you sit in a space and see a world that doesn't exist, yet you can still see it thanks to CG, that will get you. What might happen is that, in certain areas, they stop making conventional movies. For example, someone working on Shrek today is closer to making a VR movie than someone making The Expendables with traditional techniques.
To be able to move in a video, to turn around, and to record – all of this requires an amount of data that is currently close to impossible to process. Someone working on Shrek renders each frame for hours, but if you sit in virtual reality it's more complicated: every frame is different and unique as soon as you slightly move your head. Either you store all possible frames, which is close to impossible, or you manage to render them at that particular moment. That, however, would require digitizing everything, including the actors. Leonardo DiCaprio may act in a new movie, but he will be scanned and animated.
It might look like this, or it might look a bit different in the future; either way, it will take some time. We'll see.
Photos: Photoneo / Tomas Kovacovsky