OpenAI Sam Altman Interview (2023/01/19)




StrictlyVC in conversation with Sam Altman, part two (OpenAI)
youtube: https://youtu.be/ebjkD1Om4uw



...♦...



Table of Contents

Transcript (by OpenAI's Whisper)





...♦...




Transcript by Whisper large-v2 (English)


So moving on to AI, which is where you've obviously spent the bulk of your time since I saw you. We sat here three years ago and, as I was teasing you, but it's true, you were telling us what was coming, and we all thought you were being sort of, you know, hyperbolic, I guess, and you were dead serious. Why do you think, I mean, people knew that you were working on this, Google's working on this, why do you think that ChatGPT and DALL-E so surprised people?

I genuinely don't know. I've reflected on it a lot. We had the model for ChatGPT in the API for, like, I don't know, ten months or something before we made ChatGPT, and I sort of thought someone was gonna just build it or whatever, and that, you know, enough people had played around with it. Definitely if you make a really good user experience on top of something. Like, one thing that I very deeply believed was the way people wanted to interact with these models was via dialogue, and, you know, we kept telling people this, we kept trying to convince people to build it, and people wouldn't quite do it, so we finally said, all right, we're just gonna do it. But yeah, I think the pieces were there for a while. I think one of the reasons DALL-E surprised people is, if you asked, you know, five or seven years ago, the kind of ironclad wisdom on AI was:
First it comes for physical labor, truck driving, working in a factory; then the sort of less demanding cognitive labor; then the really demanding cognitive labor, like computer programming; and then, very last of all, or maybe never, because maybe it's some deep human special sauce, was creativity. And of course we can look now and say it really looks like it's gonna go exactly the opposite direction. But I think that is not super intuitive, and so I can see why DALL-E surprised people, but I genuinely felt somewhat confused about why ChatGPT did. You know, one of the things we really believe is that the most responsible way to put this out in society is very gradually, and to get people, institutions, policymakers familiar with it, thinking about the implications, feeling the technology, getting a sense for what it can do and can't do very early, rather than drop a super powerful AGI in the world all at once. And so we put GPT-3 out almost three years ago, and then we put it into an API, I think maybe like June of 2020, like two and a half years ago, and the incremental update from that to ChatGPT I felt like should have been predictable, and I want to do more introspection on why I was sort of miscalibrated on that.

So, you know, you had talked when you were here about releasing things in a responsible way. I guess, what gave you the confidence to release what you have released already? I mean, do you think we're ready for it? Are there enough guardrails in place?

It seems like it. We do, we have like an internal process where we kind of try to break things and study impacts. We use external auditors, we have external red teamers, we work with other labs and have safety organizations look at stuff. There are societal changes that ChatGPT is going to cause or is causing. There's, I think, a big one going now about the impact of this on education, academic integrity, all of that.
But starting these now, where the stakes are still relatively low, rather than just put out what the whole industry will have in a few years with no time for society to update, I think would be bad. COVID did show us, for better or for worse, or at least me, that society can update to massive changes sort of faster than I would have thought in many ways. But I still think, given the magnitude of the economic impact we expect here, more gradual is better. And so putting out a very weak and imperfect system like ChatGPT, and then making it a little better this year, a little better later this year, a little better next year, that seems much better than the alternative.

Can you comment on whether GPT-4 is coming out in the first quarter, first half of the year?

It'll come out at some point when we are confident that we can do it safely and responsibly. I think in general we are going to release technology much more slowly than people would like. We're gonna sit on it for much longer than people would like. And eventually people will be happy with our approach to this. But at the same time I realize people want the shiny toy, and it's frustrating. I totally get that.

I saw a visual, and I don't know if it was accurate, but it showed GPT-3.5 versus, I guess, what GPT-4 is expected to be.

I saw that thing on Twitter.

Did you? Was that accurate?

No.

Okay. That was a little bit scary.

The GPT-4 rumor mill is like a ridiculous thing. I don't know where it all comes from. I don't know why people don't have better things to speculate on. I get a little bit of it, like, it's sort of fun, but that it's been going for like six months at this volume. People are begging to be disappointed. And they will be. Like, you know, the hype is just... we don't have an actual AGI, and I think that's sort of what is expected of us, and, you know, yeah, we're gonna disappoint those people.

Well, I want to talk to you about how close that is.
So, you know, another thing a few years ago you said, and this was funny, I thought. We were talking about revenue. This is before you announced your partnership with Microsoft, and you said, and I quote, basically we made a soft promise to investors that once we built this generally intelligent system, we will ask it to figure out a way to generate an investment return. We all kind of laughed at this, and you said, it sounds like an episode of Silicon Valley, I know, but it is actually what I believe.

Someone sent me that video a few weeks ago. I mean, in some sense, that's what's happening. Like, we built a thing, deeply imperfect as it is. We couldn't figure out how to monetize it. You could talk to it. We put it out into the world via an API, and other people, by playing around with it, figured out all these things to do. So it was not quite the "ask it and it tells you how to monetize," but it hasn't gone totally the other direction either.

You obviously have figured out a way to make some revenue. You're licensing your models.

Not much. We're very early.

Right, but, so right now, I'm sorry, licensing to startups. So you are early on, and, you know, people are sort of looking at the whole of what's happening out there, and they're saying, you know, you've got Google, which could potentially release things this year. You have a lot of, you know, AI upstarts nipping at your heels. Are you worried about what you're building being commoditized, and I guess driving down the value?

I mean, to some degree, I hope it is. Like, the future I would like to see is where access to AI is super democratized. Where there are several AGIs in the world that can kind of help allow for multiple viewpoints and not have anyone get too powerful. And that the cost of intelligence and energy, because it gets commoditized, trends down and down and down, and the massive surplus there, access to the systems, eventually governance of the systems, benefits all of us.
So yeah, I sort of hope that happens. I think competition is good. At least, you know, until we get to AGI, I deeply believe in capitalism and competition to offer the best service at the lowest price.

But that's not great from a business standpoint.

We'll be fine.

I also find it interesting you say differing viewpoints, or these AGIs would have different viewpoints. I guess how? I mean, they're all being trained on, like, all the data that's available in the world. So how do we come up with differing viewpoints?

What I think is gonna have to happen is society will have to agree and set some laws on what an AGI can never do, or what one of these systems should never do. And one of the cool things about the path of the technology tree that we're on, which is very different... like, before we came along, it was sort of DeepMind having these games that were, like, you know, having agents play each other and try to deceive each other and kill each other and all that, which I think could have gone in a bad direction. We now have these language models that can understand language, and so we can say, hey, you know, model, here's what we'd like you to do, here are the values we'd like you to align to. And we don't have it working perfectly yet, but it works a little, and it'll get better and better. And the world can say, all right, here are the rules, here are the very broad bounds, the very broad, like, absolute rules of a system.
But within that, people should be allowed very different things that they want their AI to do. And so if you want the super, like, you know, never-offend, safe-for-work model, you should get that. And if you want an edgier one, that, you know, is sort of creative and exploratory, but says some stuff you might not be comfortable with, or some people might not be comfortable with, you should get that. And I think there will be many systems in the world that have different settings of the values that they enforce. And really what I think, and this will take longer, is that you as a user should be able to write up a few pages of "here's what I want, here are my values, here's how I want the AI to behave," and it reads it and thinks about it and acts exactly how you want. Because it should be your AI, and, you know, it should be there to serve you and do the things you believe in. So that to me is much better than one system where one tech company says, here are the rules.

That's really interesting. So also, when we sat down, it was right before your partnership with Microsoft, so when you say we're gonna be okay, I wonder if...

No, I meant nothing about that. We're just gonna build a fine business. Like, even if the competitive pressure pushes the price that people will pay per token down, we're gonna do fine. We also have this capped-profit model, so we don't have this incentive to just capture all of the infinite dollars out there anyway, and to generate enough money for our equity structure, like, yeah, I believe we'll be fine.

Well, I know you're not, you know, crazy about talking about deal-making, so we won't, but can you talk a little bit about your partnership with Microsoft, I guess how it's going and how they're using your tech?

It's great. They're the only tech company out there that I think I'd be excited to partner with this deeply.
I think Satya is an amazing CEO, but more than that, human being, and understands, as do Kevin Scott and Mikhail, who we work with closely as well, the stakes of what AGI means, and why we need to have all the weirdness we do in our structure and our agreement with them. And so I really feel like it's a very values-aligned company. And there are some things they're very good at, like building very large supercomputers and the infrastructure we operate on, and putting the technology into products. There are things we're very good at, like doing research. And it's been a great partnership.

Can you comment on whether the reports are accurate that it's going to be in Bing and Office, or maybe it's already in those things?

You are a very experienced and professional reporter. You know I can't comment on that. I know you know I can't comment on that. You know I know you know I can't comment on that. In the spirit of shortness of life and our precious time here, why do you ask?

Sam, I'm genuinely curious.

Well, I mean, like, if you're asking a question you know I'm not gonna answer...

Well, I thought you might answer that one. No, okay, I know there are some things you don't answer, but I got to try.

Another company's product plans I'm definitely not gonna touch.

Well, okay, let me ask you about yours then. Is your pact with Microsoft, does it preclude you from building software and services?

No, no, we build stuff. I mean, as we just talked about, ChatGPT, we have lots more cool stuff coming.

Okay, and what about other partnerships, other than with Microsoft?

Yeah, yeah. I mean, like, again, in general we are very much here to build AGI, and products and services are tactics in service of that, partnerships too. But important ones, and we really want to be useful to people. And I think if we just build this in a lab and don't figure out how to get it out into the world, that's... we're somehow really falling short there.
Well, I wondered what you made of the fact that Google has said to its employees, it's just too imperfect, it could harm our reputation, we're not ready.

I hope, when they launch something anyway, you really hold them to that comment. I'll just leave it there.

Okay, let me ask you this. What did you think when they suspended that seven-year veteran of their responsible AI organization who thought that the chatbot he was working on for them had become sentient?

You know, I remember reading a little bit about that, but not enough that I feel like I can comment. Like, I basically only remember the headline.

I guess I thought at the time he sounded like a crackpot, and now that I've seen ChatGPT I think... maybe that's why you rushed out ChatGPT, because yours is amazing, and if theirs is also amazing, you know...

I haven't seen theirs. I think they're a competent org, so I would assume they have something good, but I don't know anything about it.

So we talked earlier on about education. People are scared and excited. I was just telling you that my 13-year-old came home from school a couple days ago, and his teacher was telling him, don't be scared by AI, but, you know, you guys are gonna have to sort of develop different skill sets in your lifetime that are valued. But there is a lot of concern. The New York public school system just restricted access to ChatGPT, which is probably not as big a story as it sort of seemed from the headline, but what do you tell educators? What are misconceptions about what you're working on? How can you kind of allay their concerns?

Look, I get it. I get why educators feel the way they feel about this, and probably this is just a preview of what we're gonna see in a lot of other areas.
I think this is just the new... we're gonna try to, you know, do some things in the short term, and there may be ways we can help teachers be a little bit more likely to detect the output of a GPT-like system, but honestly, a determined person is gonna get around them, and I don't think it'll be something society can or should rely on long term. We're just in a new world now. Like, generated text is something we all need to adapt to, and that's fine. We adapted to, you know, calculators and changed what we tested for in math classes, I imagine. This is a more extreme version of that, no doubt, but also the benefits of it are more extreme as well. You know, we hear from teachers who are understandably nervous about the impact of this on homework. We also hear a lot from teachers who are like, wow, this is an unbelievable personal tutor for each kid. And I have used it to learn things myself and found it much more compelling than other ways I've learned things in the past. Like, I would much rather have ChatGPT teach me about something than go read a textbook. So, you know, it's an evolving world, and we'll all adapt, and I think be better off for it, and we won't want to go back.

Well, my 15-year-old son came home one day and was using it to understand some science concepts better, which I thought was really great. Yeah, but the same kid also was like, could I use this to write my papers? So I did want to ask about watermarking technologies and other techniques. So it sounds like...

No, I think, you know, we will experiment with this, other people will too. I think it is important for the transition, but I would caution policy, national policy, individual schools, or whatever, from relying on this, because fundamentally I think it's impossible to make it perfect. You know, people will figure out how much of the text they have to change, there will be other things that modify the outputted text.
I think it's good to pursue, and we will, but I think what's important to realize is the playing field has shifted, and that's fine. There's good and bad, and rather than try to go back, we just figure out the way forward.

So even if you develop technologies, they could be sort of rendered irrelevant in a few months.

I suspect so, yeah.

I also wanted to ask about Anthropic, a rival, I guess, founded by a former...

Yeah, again, like, a rival in some sense. I think super highly of those people, like, very, very talented, and multiple AGIs in the world I think is better than one.

Sure. Well, what I was gonna ask, and just for some background, it was founded by a former OpenAI VP of research, who you I think met when he was at Google, but it is stressing an ethical layer as a kind of distinction from other players, and I just wondered if you think that systems should adopt, you know, a kind of common code of principles, and also whether that should be regulated.

Yeah, I mean, that was my earlier point. I think society should regulate what the kind of wide bounds are, but then I think individual users should have a huge amount of liberty to decide how they want their experience and their interaction to go. So I think it is a combination of society... you know, there are a few asterisks on the free speech rules, and society has decided free speech is not quite absolute. I think society will also decide language models are not quite absolute. But there's a lot of speech that is legal that you find distasteful, that I find distasteful, that he finds distasteful, and we all probably have somewhat different definitions of that. And I think it is very important that that is left to the responsibility of individual users and groups, not one company, and that the government, like, govern and not dictate all of the rules.
There are a lot of people here who I think want to ask you questions, and I know you can't stay forever. I wanted to ask one more question before I turn it over to the crowd. Video, is that coming?

It will come. I wouldn't want to make a confident prediction about when. Obviously, like, people are interested in it. We'll try to do it, other people will try to do it. It's a legitimate research project, so it could be pretty soon, it could take a while.

Okay, let's see, who would like to ask Sam a question? Oh, great, I got to run over here. Thank you.

Hi. Fusion: when do you think there will be a commercial plant actually producing electricity economically?

Yeah, I think by like 2028, pending, you know, good fortune with regulators, we could be plugging them into the grid. I think we'll do a really great demo well before that, like, hopefully pretty soon.

Hey, Sam, thank you. What is your, and I don't know if you're allowed to answer this, but what is your best-case scenario for AI and worst case? Or, more pointedly, what would you like to see and what would you not like to see out of AI in the future?

I mean, I think the best case is so unbelievably good that it's hard for me to even imagine. Like, I can sort of think about what it's like when we make more progress of discovering new knowledge with these systems than humanity has done so far, but, like, in a year instead of 70,000 years. I can sort of imagine what it's like when we kind of launch probes out to the whole universe and find out, really, you know, everything going on out there.
I can sort of imagine what it's like when we have just unbelievable abundance and systems that can, sort of, you know, help us resolve deadlocks and improve all aspects of reality and kind of let us all live our best lives. But I can't quite... I think the good case is just so unbelievably good that you sound like a really crazy person to start talking about it. And the bad case, and I think this is important to say, is, like, lights out for all of us. I'm more worried about an accidental misuse case in the short term, where, you know, someone gets a super powerful... it's not like the AI wakes up and decides to be evil. And I think all of the sort of traditional AI safety thinkers reveal a lot more about themselves than they mean to when they talk about what they think the AGI is gonna be like. But I can see the accidental misuse case clearly, and that's super bad. So I think it's impossible to overstate the importance of AI safety and alignment work. I would like to see much, much more happening. But I think it's more subtle than most people think. You hear a lot of people talk about AI capabilities and AI alignment as orthogonal vectors, and, you know, you're bad if you're a capabilities researcher and you're good if you're an alignment researcher. It actually sounds very reasonable, but they're almost the same thing. Like, deep learning is just gonna solve all of these problems, and so far that's what the progress has been, and progress on capabilities is also what has let us make the systems safer, and vice versa, surprisingly. And so none of the sort of soundbite easy answers work.

Alfred Lin told me to ask you, and I was gonna ask anyway: how far away do you think AGI is? He said Sam will probably tell you sooner than you thought.
The closer we get, the harder time I have answering, because I think that it's gonna be much blurrier and much more of a gradual transition than people think. If you imagine a two-by-two matrix of, sort of, short timelines until the AGI takeoff era begins and long timelines until it begins, and then a slow takeoff or a fast takeoff: the world I think we're heading to, and the safest world, the one I most hope for, is the short-timeline, slow-takeoff one. But I think people are gonna have hugely different opinions about when and where you kind of declare victory on the AGI thing.

Thank you, Sam. 30 seconds. First is, when you spoke a few years ago, I was highly skeptical, and so you put me on notice. Felt like Netscape when I was a teenager. Thank you very much. The question I have is less science and technology and more geography, which is, what's your take on San Francisco and Silicon Valley? Because you referenced it earlier.

Man, I love this city so much, and it is so sad what the current state is. I do think it's somewhat come back to life after the pandemic, but yeah, like, when you walk down Market Street at night, or if I try to walk home and walk through the Tenderloin late, it's not great. And I think it's a real shame that we put up with treating people like this, and we continue to elect leaders who sort of don't think this is okay, but also don't fix the problem. I totally get how hard this is. I totally get how complicated this is. I also, I think, unlike other tech people, will say that tech has some responsibility for it. But other cities manage to do better than this. Like, it is a solvable problem, and to entirely blame tech companies, who don't get to run the city, that doesn't feel good either, and I wish there could be a more collaborative partnership instead of all of the finger-pointing. I am super long in-person work. I am super long the Bay Area. I'm super long California.
I think we are probably going through some trying times, but I am hopeful we come out of the fire better for it.

Can you talk a little bit more about what you expected the reaction to ChatGPT to be, and also, would you prefer that there wasn't so much hype?

I would have expected maybe one order of magnitude less of everything: one order of magnitude less of hype, one order of magnitude less of users. And I think less hype is probably better, just as a general rule. One of the sort of strange things about these technologies is they are impressive but not robust. And so in a first demo you kind of have this very impressive, like, wow, this is incredible and ready to go. You use them a hundred times, you see the weaknesses. And so I think people can get a false impression of how good they are. However, that's all going to get better. The critics who point these problems out and say, well, this is why it's all, like, you know, fake news or whatever, are equally wrong. And so I think it's good in the sense that people are updating to this, thinking hard about it, and all of that.

Can I ask, how do you use it? You know, when we were emailing back and forth, I thought, am I talking to Sam?

I have occasionally used it to summarize super long emails, but I've never used it to write one. I actually use it to summarize a lot, it's super good at that. I use it for translation. I use it to learn things.

So two quick questions. When people talk about your technologies being the end of Google, how do you unpack or how do you understand that? And then also, your thoughts on UBI?

Yeah, I think whenever someone talks about a technology being the end of some other giant company, it's usually wrong. Like, I think people forget they get to make a counter-move here, and they're pretty smart, pretty competent.
But I do think it means there is a change for search that will probably come at some point, but not as dramatically as people think in the short term. My guess is that people are going to be using Google the same way people are using Google now for quite some time. And also, Google, for whatever this whole code red thing is, it's probably not going to change that dramatically would be my guess.

UBI: I think UBI is good and important, but very far from sufficient. I think it is a little part of the solution. I think it's great. Like, as AGI participates more and more in the economy, I think we should distribute wealth and resources much more than we have, and that'll be important over time. But I don't think that's gonna solve the problem. I don't think that's gonna give people meaning. I don't think it means people are gonna entirely stop trying to create and do new things and whatever else. So I sort of would consider it an enabling technology, but not a plan for society.

Is that why your company, though, is a capped-profit company? I mean, are you planning to take the proceeds that, presumably, you're presuming you're gonna make someday, and give them back to society?

I mean, whether we do that just by saying, here's cash for everyone, totally possible, or whether we do that by saying, we're gonna invest all of this in a nonprofit that does a bunch of science, because scientific progress is how we all make progress. Unsure. But yeah, we would like to operate for the good of society. And I'm a big believer in, sort of, design a custom structure for whatever you're trying to do, and I think AGI is just really different, and so the cap will turn out to be super important.
Can I ask, selfishly, so if UBI is only part of the solution, and I've got teenagers and we all have jobs, what should we be preparing for? You know, as I said, my son's teacher was trying to prepare them, but of course you would maybe be better positioned to have some ideas on this.

Resilience, adaptability, ability to learn new things quickly, creativity, although it'll be aided creativity. And aided learning things quickly. I mean, for sure, like, in some sense, before Google came along there were a bunch of things that we learned, like, memorizing facts was really important, and that changed. And now I think learning will change again, and we'll probably adapt faster than we think.

Okay, I think we have to let Sam go, but how about, like, two more questions? Thank you, thank you so much.

The future workplace for tech workers: do you think it'll be out of the home, out of the office? What percent in each?

Look, I think people are gonna do different things. I don't think there will be one answer, and I think people will sort: the people who want fully in-person will do that, people who want fully remote will do that, and I think a lot of people will do, like, hybrid. I have always been a fan of going to the office a few days a week and working at home a day or two a week. Being a YC partner was very much that way, OpenAI was that way before the pandemic, OpenAI is that way now.
I personally am skeptical that fully remote is gonna be the thing that everyone does, and I think even the people who thought it was a really good idea are now sort of saying, like, hmm, the next, like, 40 years sitting in my bedroom looking at a computer screen on Zoom, do I really want that? Am I really sure? With some skepticism there. What I think has been the hardest is companies who are the wrong kind of hybrid, where it's not, like, you know, these four days everyone's in, these two days everyone's home, whatever, but it's come in if you want, be at home if you want, and then you have, like, half the people in the meeting as this little box on the screen and half the people in person. It's clearly a way better experience in person; the people that are not there do get sort of left out. That I think is the hardest. But it's all gonna continue to evolve, and people will sort into what they want. I would bet that many of the most important companies of this decade are still pretty heavily in person.

Do you work for CBRE?

No.

Maybe. So, one of you... you guys want to wrestle for it? He has a YC shirt, so let's do him too.

So, given your experience with OpenAI safety and the conversation around it, how do you think about safety in other AI fields, like autonomous vehicles?

Yeah, I think there are a bunch of safety issues for any new technology, and particularly any narrow vertical of AI, and we kind of have learned a lot in the past few decades, or more than a few, past, like, seven or eight decades, of technological progress about how to do really good safety engineering and safety systems management. And a lot of that, about how we learn how to build safe systems and safe processes, will translate. Imperfectly, there will be mistakes, but we know how to do that.
I think the AGI safety stuff is really different, personally, and worthy of study as its own category. Because the stakes are so high and the irreversible situations are so easy to imagine, we do need to somehow treat that differently and figure out a new set of safety processes and standards.

So you said, like, right now is one of the best times to start a company. I found that counterintuitive. Maybe you can explain why, and what companies should I tell my friends to go start? Because I actually have pretty few smart friends who are looking to do something.

The only thing that I think is easy in a mega-bubble is capital. So it was a great time to raise capital for a startup from, say, 2015 to around the end of 2021, but everything else was pretty hard. Like, pretty hard to hire people, pretty hard to rise above the noise, pretty hard to do something that mattered without having thousands of competitors right away. And a lot of those startups that looked like they were doing well, because of the same reason, capital was cheap, found that actually they were not able to build as much enduring value as they hoped. Now raising capital is tough. It's still sort of reasonable, I think, at seed stages, but it certainly seems much tougher at later stages. But all the other stuff is much easier. You actually can concentrate talent, people are not constantly poached, you can rise above the noise threshold, whether that's with, like, customers, the press, you know, users, whatever. I would much rather have a hard time raising capital but an easier time doing everything else than the other way around. So that's why I think it's a better time. In terms of what I would do now, I would probably go do AI for some vertical.
It brought to mind this story in The Information about Jasper that I thought was interesting — it's a customer of yours, a copywriting company relying on your, you know, AI language models, and now ChatGPT is so good that it's got to kind of, like, find a new reason for being, I think. Is that a danger for many startups, I guess? Which ones, if so?

I heard about this article but I didn't read it. But if I understand it, it was basically like the company was saying, you know, we had built this thing on GPT-3 and now ChatGPT is available for free and that's causing us problems — is that right? I think — well, let me say, I think the best thing you can do to make an AI startup is the same way that a lot of other companies differentiate, which is to build, like, deep relationships with customers, a product they love, and some sort of moat — that doesn't have to be technology; a network effect or whatever. And I think a lot of companies in the AI space are doing exactly that. And you've got to plan that OpenAI's models are gonna get better and better. We view ourselves more as a platform company, but we will do some — you know, a business strategy I've always really respected is the platform plus killer app together — and so we will probably do something to help show people what we think is possible. But I think you want to build a startup — and I think Jasper is gonna do this, or already is doing this — that has, like, deep value on top of the fundamental language model, and we are a piece of enabling technology.

Is there anybody, knowing what you know or what you think you see coming, that should basically drop what they're doing right now because they're cooked?

Like, I'm sure if I had more time to think about it I could come up with an answer, but in general I think there's gonna be way, way more new value created — like, this is gonna be a golden few years — than people who should just, like, stop what they're doing.
I would not ignore it — I think you've got to, like, embrace it big time — but I think the amount of value that's about to get created, we have not seen anything like it since the launch of the iPhone App Store, something like that. It's incredible. This is gonna be an amazing year, I'm sure. I'm so thankful. Thank you for having me. Oh my gosh, Sam, thank you.


Back to table of contents




...♦...