Mira Murati

Mira Murati (born 1988) is an Albanian engineer and business executive who joined OpenAI in 2018 and has served as its chief technology officer since 2022.

Quotes

  • We’re working on something that will change everything. Will change the way that we work, the way that we interact with each other, and the way that we think and everything, really, all aspects of life.
  • Journalist: There’s always a fear that government involvement can slow innovation. You don’t think it’s too early for policymakers and regulators to get involved?
Murati: It’s not too early. Everyone needs to start getting involved, given the impact these technologies are going to have.
  • Journalist: Let’s take a step back: There’s so much interest not just in the product but the people making this all happen. What do you think are the most formative experiences you’ve had that have shaped you and who you are today?
Murati: Certainly growing up in Albania. But also, I started in aerospace, and my time at Tesla was certainly a very formative moment—going through the whole experience of design and deployment of a whole vehicle. And definitely coming to OpenAI. Going from just 40 or 50 of us when I joined and we were essentially a research lab, and now we’re a full-blown product company with millions of users and a ton of technologists. [OpenAI now has about 500 employees.]
  • Journalist: Will GPT-5 solve the hallucination problem?
Murati: Well, I mean maybe. Let's see. We've made a ton of progress on the hallucination issue with GPT-4, but we're not where we need to be. But we're sort of on the right track. And it's unknown, it's research. It could be that continuing in this path of reinforcement learning with human feedback, we can get to reliable outputs. And we're also adding other elements like retrieval and search. So you can provide more factual answers or get more factual outputs from the model. So there's a combination of technologies that we're putting together to kind of reduce the hallucination issue.
  • Journalist: Is there a path between products like GPT-4 and AGI?
Murati: We’re far from the point of having a safe, reliable, aligned AGI [Artificial General Intelligence] system. Our path to getting there has a couple of important vectors. From a research standpoint, we’re trying to build systems that have a robust understanding of the world similar to how we do as humans. Systems like GPT-3 initially were trained only on text data, but our world is not only made of text, so we have images as well and then we started introducing other modalities. The other angle has been scaling these systems to increase their generality. With GPT-4, we’re dealing with a much more capable system, specifically from the angle of reasoning about things. This capability is key. If the model is smart enough to understand an ambiguous direction or a high-level direction, then you can figure out how to make it follow this direction. But if it doesn’t even understand that high-level goal or high-level direction, it’s much harder to align it. It’s not enough to build this technology in a vacuum in a lab. We need this contact with reality, with the real world, to see where are the weaknesses, and where are the breakage points, and try to do so in a way that’s controlled and low risk and get as much feedback as possible.

External links

Wikipedia has an article about: Mira Murati