• “There’s a long way to go, and a lot of compute to build out between here and AGI . . . training expenses are just huge,” says Sam Altman about seeking more funding from Microsoft – he confirmed that OpenAI is working on GPT-5, said that OpenAI’s true single product is intelligence, and that the biggest challenge is to figure out how to generate new knowledge. - Financial Times
• “Would anyone still want to get an education when an agent has all the answers?” asks Bill Gates in his latest essay on why AI agents will end software as we currently know it and on the challenges that still need solving before that becomes a reality. - GatesNotes
• “2024 is going to see a significant shift in the narrative from purely cloud-based inference of AI and LLMs to local on-device processing” – Macs already do well at running LLMs locally, opening up a wide avenue for the transition from cloud to on-device AI (a minimal sketch of what that looks like in code follows this list). - Creative Strategies
• The hallucination leaderboard paints an optimistic picture that we’re close to 100% accurate LLMs – GPT-4 scored 97% accuracy while Google’s PaLM had the worst score at 87.9%; however, the evaluation task was only summarizing a document, so take it with a grain of salt (a rough sketch of this kind of evaluation loop also follows this list). - GitHub
• As an example of what GPTs can do, here’s one that converts files into different formats (ChatGPT Plus required) – there are already lots of GPTs out there, but the store isn’t live yet, so they’re hard to find; lists on GitHub or GPTsCatalog serve as an alternative for now. - ConvertAnything, GitHub, GPTsCatalog
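To make the on-device point above concrete, here is a minimal sketch of local inference on a Mac, assuming the open-source llama-cpp-python bindings and an already-downloaded GGUF model file; the model path is a placeholder.

```python
# Minimal local-inference sketch (llama-cpp-python). The model path is a
# placeholder; n_gpu_layers=-1 offloads all layers to the GPU, which on an
# Apple Silicon Mac means the Metal backend.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/mistral-7b-instruct.Q4_K_M.gguf",  # placeholder path
    n_ctx=4096,        # context window size
    n_gpu_layers=-1,   # offload every layer to the GPU
)

out = llm(
    "Q: Why run an LLM on-device instead of in the cloud? A:",
    max_tokens=128,
    stop=["Q:"],
)
print(out["choices"][0]["text"])
```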
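For the hallucination leaderboard item, here is a rough sketch of how a summarization-faithfulness score like “97% accuracy” can be computed. This is not the leaderboard’s actual code; `summarize` and `is_consistent` are hypothetical placeholders for the model under test and for the judge that decides whether a summary is supported by the source document.

```python
# Sketch of a summarization-faithfulness benchmark loop (illustrative only).
# `summarize` and `is_consistent` are hypothetical placeholders, not part of
# the leaderboard's published tooling.
from typing import Callable, List

def faithfulness_accuracy(
    documents: List[str],
    summarize: Callable[[str], str],
    is_consistent: Callable[[str, str], bool],
) -> float:
    """Percentage of summaries judged factually consistent with their source."""
    consistent = 0
    for doc in documents:
        summary = summarize(doc)          # model under test writes a summary
        if is_consistent(doc, summary):   # judge checks it against the source
            consistent += 1
    return 100.0 * consistent / len(documents)

# A model judged consistent on 97 out of 100 documents scores 97%, which is
# the kind of single number that ends up on the leaderboard.
```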