Ruzena Bajcsy

Ruzena Bajcsy (born 28 May 1933) is an American engineer and computer scientist who specializes in robotics. She is professor of electrical engineering and computer sciences at the University of California, Berkeley, where she is also director emerita of CITRIS (the Center for Information Technology Research in the Interest of Society).
Quotes
- Well, in those days there were no CCD cameras, so it was a regular camera like yours, and attached to it was a digitizer, which provided you digital images, and then you worked offline on these digital images. And the challenge there was that first of all the memory in computers was very small. We worked on small images, so the resolution was not very good — we worked on 64 by 64 pixels or at best 128 by 128 pixels, so the resolution was rough, and in texture you really need a higher resolution in order to capture the local properties, so that was a challenge. Technology just wasn’t there. Everything had to be made in the lab. We had to build digitizers and deal with swapping the memory to disk all the time, because the memory was typically 64K or so — then later on I worked on a PDP-11, which had a 32K memory, the cache. So it was a very different time, and therefore the approaches you took were quite different; today people use millions of pictures and do statistical analysis, very different.
- Well, no. There are many other changes. I mean, of course the speed and the memory size of the computer is phenomenally different, but in parallel, distributed computing is also a tremendously big difference, and the display systems. At that time — when I worked at Stanford we didn’t even have a raster display. It was all just vector displays that displayed only the contours. At the end of my stay at Stanford we got the first raster display, which allowed you to display all the pixels, but in the beginning it was all just displaying contours.
- Well, that’s a different story. When I moved to Penn there was nothing, and so I had to build everything from scratch. My first master’s student, Adam Snyder, was an electrical engineer, and he and I built the digitizer, because we had to buy these analog cameras — camera, one camera — and then build a digitizer. And then I got some initial NSF grants that allowed me to buy a raster display so that we could visualize. The department had a PDP-11, so I was working on that computer, and, as I said, it had just a very small memory, and so we were swapping things back and forth. I basically wanted to continue my Ph.D. work, and one of the first things that I started to look at was texture gradient and how you can interpret three-dimensional information from the two-dimensional projection of the texture gradient. So that was the first Ph.D. — Larry Lieberman was my first Ph.D. student, who then went to IBM Research and had a group in robotics — because I was always interested in the three-dimensional interpretation of the visual information from the very beginning, and I was always interested in what vision is for, not just vision per se. I thought that it was a good way of testing your algorithms: can you find the interpretation of where you are, or can you grasp things or move around? I mean, robotics gives you a very good testing of your sensory processing, so that’s what I focused on. And then, I think it was maybe ’73 or ’74, a man by the name of Britton Chance, who just passed away yesterday, 97 years old, who was a very prominent biophysicist, invited me to look at some of these X-ray images from rat brains, because they were really interested in automating the image processing of these medical images. So he brought me into this group, and I found it very interesting, and so that started my career in medical image processing.
I worked with him or for him for about six months, but then he was a very strong personality and my joke was you worked for him or you didn’t, and so I quit because I didn’t want to work for him. But then I continued with other people in the medical school at Penn on medical image processing, and I did some nice work there.
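The texture-gradient idea mentioned above can be illustrated with a minimal sketch. Under perspective projection, a texture element of fixed physical size at depth z projects to an image size of roughly (f·s)/z, so measured element sizes in the image give relative depth up to the unknown scale f·s. The function name and the numbers below are hypothetical, chosen only to show the inversion; this is not the actual method of the cited Ph.D. work.

```python
def depth_from_projected_size(projected_sizes, focal_times_size=1.0):
    """Invert s_img = (f*s)/z to get z = (f*s)/s_img for each measurement.

    projected_sizes: measured image sizes of texture elements (image units).
    focal_times_size: the unknown product f*s; depths are relative to it.
    """
    return [focal_times_size / s for s in projected_sizes]

# A receding textured ground plane: elements shrink toward the horizon,
# so the recovered depths grow accordingly.
sizes = [0.50, 0.25, 0.125, 0.0625]   # hypothetical projected sizes
depths = depth_from_projected_size(sizes)
print(depths)  # [2.0, 4.0, 8.0, 16.0]
```

The gradient of recovered depth across the image is what signals the surface's slant away from the camera.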
- Well, the first robotics system was when I got money to purchase a PUMA robot, which was one of the first robots available for academic labs. And around ’76, ’77, I started to connect the manipulation with the camera work, and grasping was one of the — and of course in those days they had only two-finger grasping, so I initiated with the Penn mechanical-engineering department to design a three-fingered robot, and so we were very much ahead of our time in this regard. Around that time I also connected with a French laboratory in Toulouse, which had a sensitive rubber pressure sensor, and as part of the collaboration they made me what we called a “French finger,” which had sensors, so we were able to make this kind of tactile connection. And so one of my first Ph.D. students — well, he wasn’t my first, but I don’t know, third or so. I think it was third or fourth. Peter Allen, who’s now a professor at Columbia University, did the first work on this interaction of tactile and visual information, and so that was that.
- With the three-fingered hand? This was — well, I forget the name of the professor in the mechanical-engineering department. He passed away too. I kind of outlive all of my collaborators, but I think the student who designed it was Abramson, if I remember: a three-fingered hand with a palm, and then we put some tactile pressure sensors on that, which allowed us to find where you are making contact. And actually, Ken Goldberg, who is a professor here, did his master’s degree with me on using these tactile sensors for recognition purposes. During that time I also connected with psychologists who were prominent in tactile perception. One of them was Susan Lederman, who is an outstanding professor-scientist in tactile perception at Queen’s University in Canada. And then she connected me with another professor, Roberta Klatzky, who is a professor at CMU in the psychology department, and we had a very fruitful collaboration. I learned quite a bit from these two people about how people perceive — for example, people have certain procedures, motoric procedures, when they want to find out how hard something is or what the surface texture is, and so there are these modules — procedures — that people use for exploring. They called them “exploratory procedures.”
