Stop Building AI Tools Backwards | Hazel Weakly
My favorite (evidence-backed) theory on how humans learn is Retrieval Practice.
https://www.learningscientists.org/blog/2024/3/7/how-does-retrieval-improve-new-learning
Humans don’t really learn when we download info into our brains; we learn when we expend effort to pull that info out. This has some big implications for designing collaborative tooling!
The “thing” that we learn most effectively is not knowledge as we typically think of it; it’s process. This should be intuitive if we put it in a more natural context. Imagine learning to bake for a moment: do you teach someone to bake a cake by spitting out a fact sheet of ingredients and having them memorize it? Or do you teach them the process?
Sot GameTorch
How did *thinking* reasoning LLMs go from a GitHub experiment to every major company offering super advanced thinking models that can iterate on code and internally plan it, only 4 months later? It seems a bit fast. Was it already developed by major companies, but unreleased? : MLQuestions
It was like a revelation when chain-of-thought AI became viral news as a GitHub project that supposedly competed with the state of the art using only 2 developers and some nifty prompting...
Did all the companies just jump on the bandwagon and weave it into GPT / Gemini / Claude in a hurry?
Did those companies already have e.g. Gemini 2.5 Pro thinking in development 4 months ago and we didn't know?
Why the Coolest Job in Tech Might Actually Be in a Bank
For tech and AI talent, jobs at financial services companies are more desirable than they have ever been. Banks have been working hard to make it happen.
Personal Software: The Unbundling of the Programmer?
Why LLMs will transform development but not how you think
It's about how AI tools are enabling a new category of software that simply couldn't exist before.
When someone can describe their specific needs conversationally and receive working code in response, the economics of personal software development shift dramatically.
Think of it this way: just as spreadsheets enabled non-programmers to perform complex calculations and data analysis, AI-assisted development tools are enabling non-programmers to create personal software solutions.
Which AI to Use Now: An Updated Opinionated Guide
Picking your general-purpose AI
Also:
https://www.oneusefulthing.org/p/doing-stuff-with-ai-opinionated-midyear
Magic Color Picker
The Text2Color API allows you to convert text descriptions of colors in any language into their corresponding color codes. This API uses advanced language processing to interpret color descriptions and return accurate color representations in various formats including HEX, RGB and CMYK.
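As a rough sketch of what calling such a service could look like from code; the endpoint URL, payload fields, and response shape below are placeholder assumptions rather than the actual Text2Color API:

```python
import requests

# Placeholder endpoint -- the real Text2Color service and its parameters may differ.
API_URL = "https://example.com/text2color"

def describe_color(description: str, fmt: str = "hex") -> str:
    """Send a natural-language color description, get back a color code."""
    response = requests.post(API_URL, json={"text": description, "format": fmt})
    response.raise_for_status()
    # Assumed response shape: {"color": "#FF8C42"} for "a warm sunset orange".
    return response.json()["color"]

if __name__ == "__main__":
    print(describe_color("a warm sunset orange"))
    print(describe_color("un bleu nuit profond", fmt="rgb"))
```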
GraphRAG: The Most Incredible RAG Strategy Revealed
Today, we dive into the revolutionary Graph RAG from Microsoft, an advanced retrieval-augmented generation system that enhances AI responses by providing relevant context. A toy sketch of the graph-retrieval idea follows the topic list below.
📌 In this video, you will learn:
What is RAG (Retrieval-Augmented Generation)?
Differences between Basic RAG and Graph RAG
How to implement Graph RAG in your application
Step-by-step guide on setting up Graph RAG
Advantages of using Graph RAG over traditional methods
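To make the basic-vs-graph distinction concrete, here is a small, self-contained sketch of the graph-retrieval idea (my own toy example, not Microsoft's GraphRAG implementation): entities are extracted from documents, co-occurring entities are linked in a graph, and at query time the neighborhood of the matched entities is pulled in as context. The sample documents and the naive entity extractor are stand-ins.

```python
# Toy sketch of graph-based retrieval: link entities that co-occur in a document,
# then answer queries with the neighborhood of the entities the question mentions.
import itertools
import networkx as nx

documents = {
    "doc1": "GraphRAG was introduced by Microsoft Research to improve retrieval.",
    "doc2": "Retrieval-augmented generation grounds LLM answers in external data.",
}

def extract_entities(text: str) -> set[str]:
    # Stand-in for a real extractor (an LLM or NER model in practice):
    # treat capitalized words as entities.
    return {w.strip(".,").lower() for w in text.split() if w[0].isupper()}

graph = nx.Graph()
for doc_id, text in documents.items():
    entities = extract_entities(text)
    graph.add_nodes_from(entities)
    graph.add_edges_from(itertools.combinations(entities, 2))

def graph_context(query: str, hops: int = 1) -> set[str]:
    """Entities within `hops` of any entity mentioned in the query."""
    seeds = extract_entities(query) & set(graph.nodes)
    context = set(seeds)
    for seed in seeds:
        context |= set(nx.single_source_shortest_path_length(graph, seed, cutoff=hops))
    return context

# These entities (plus their source passages) would be handed to the LLM as
# context, instead of only the top-k vector matches a basic RAG would retrieve.
print(graph_context("What did Microsoft build GraphRAG for?"))
```

A real pipeline would use an LLM for entity and relationship extraction and would summarize graph communities, but the retrieval step follows the same shape.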
Losing the imitation game
AI cannot develop software for you, but that's not going to stop people from trying to make it happen anyway. And that is going to turn all of the easy software development problems into hard problems.
- A computer can never be held accountable. Therefore, a computer must never make a management decision.
Programming as Theory Building
Non-trivial software changes over time. The requirements evolve, flaws need to be corrected, the world itself changes and violates assumptions we made in the past, or it just takes longer than one working session to finish. And all the while, that software is running in the real world. All of the design choices taken and not taken throughout development; all of the tradeoffs; all of the assumptions; all of the expected and unexpected situations the software encounters form a hugely complex system that includes both the software itself and the people building it. And that system is continuously changing.
To circle back to AI like ChatGPT, recall what it actually does and doesn't do. It doesn't know things. It doesn't learn, or understand, or reason about things. What it does is probabilistically generate text in response to a prompt.
14islands | The art of prompting: An introduction to Midjourney
A great deal of my learning and inspiration comes from the excellent content by Yubin Ma at AiTuts, where you can learn more about prompting and view a myriad of examples.
Ask HN: Tutorial on LLM / already grasp neural nets | Hacker News
I've watched the 4 videos from 3blue1brown on neural nets. The web and YouTube are awash with mediocre videos on Large Language Models. I'm looking for a good one.
This is part of a longer series but is maybe the single best video I know of on the topic:
https://youtu.be/kCc8FmEb1nY?si=zmBleKwlpV06O3Mw
I thought this video from Stephen Wolfram was also quite good:
https://www.youtube.com/live/flXrLGPY3SU?si=SrP1EJFMPJqVCFPL
What are embeddings?
A deep-dive into machine learning embeddings.
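A quick illustration of the core idea (mine, not the article's): texts are mapped to dense vectors, and semantic similarity becomes geometric closeness. The sentence-transformers package and its all-MiniLM-L6-v2 model are just convenient choices here.

```python
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

sentences = [
    "The cat sat on the mat.",
    "A kitten is resting on a rug.",
    "Quarterly revenue grew by twelve percent.",
]
embeddings = model.encode(sentences)  # one 384-dimensional vector per sentence

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# The two cat sentences should score far closer to each other than either does
# to the finance sentence.
print(cosine(embeddings[0], embeddings[1]), cosine(embeddings[0], embeddings[2]))
```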
How to Use AI to Do Stuff: An Opinionated Guide
Covering the state of play as of Summer 2023
The Illustrated Stable Diffusion – Jay Alammar – Visualizing machine learning one concept at a time.
This is a gentle introduction to how Stable Diffusion works.
How to use AI to do practical stuff: A new guide
People often ask me how to use AI. Here's an overview with lots of links.
- The Six Large Language Models
- Write stuff
- Make Images
- Come up with ideas
- Make videos
- Coding
GPT-4: We Are in a Major Technological Change – Don Norman's JND.org
Yes, there has been much hype over the imagined powers and flaws of the new Large Language Models (e.g., Chat GPT-4), but the recent advances (that is, as of today in April 2023) indicate that there…
https://arxiv.org/pdf/2303.12712.pdf
A talk by the lead author, Sébastien Bubeck, at MIT on March 22, 2023: Sparks of AGI: Early experiments with GPT-4.
https://www.youtube.com/watch?v=qbIk7-JPB2c
What Is ChatGPT Doing … and Why Does It Work?—Stephen Wolfram Writings
Stephen Wolfram explores the broader picture of what's going on inside ChatGPT and why it produces meaningful text. Discusses models, training neural nets, embeddings, tokens, transformers, language syntax.
Camera obscura: the case against AI in classrooms: Matthew Butterick
Research means more than fact-checking
When I first used GitHub Copilot, I said it “essentially tasks you with correcting a 12-year-old’s homework … I have no idea how this is preferable to just doing the homework yourself.” What I meant is that often, the focus of programming is not merely producing code that solves a problem. Rather, the code tends to be the side effect of a deeper process, which is to learn and understand enough about the problem to write the code. The authors of the famous MIT programming textbook Structure and Interpretation of Computer Programs call this virtuous cycle procedural epistemology. We could also call it by its less exotic name: research.
Chat with Document(s) using OpenAI ChatGPT API and Text Embedding
The short answer is that they convert documents that are over 100 or even 1,000 pages long into a numeric representation of data and related context (vector embedding) and store them in a vector search engine.
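A minimal sketch of that pipeline, assuming the current openai Python SDK (the article predates it, so its code will look different): chunk the document, embed the chunks, hold them in a tiny in-memory vector store, and answer questions from the top-scoring chunks. The file name and model names are illustrative choices, not the article's.

```python
import numpy as np
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

document = open("manual.txt").read()                       # any long text file
chunks = [document[i:i + 1000] for i in range(0, len(document), 1000)]

def embed(texts: list[str]) -> np.ndarray:
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

chunk_vectors = embed(chunks)                              # the "vector search engine"

def answer(question: str, k: int = 3) -> str:
    q = embed([question])[0]
    # Cosine similarity between the question and every chunk.
    scores = chunk_vectors @ q / (
        np.linalg.norm(chunk_vectors, axis=1) * np.linalg.norm(q)
    )
    context = "\n---\n".join(chunks[i] for i in np.argsort(scores)[-k:])
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "Answer using only the provided context."},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return resp.choices[0].message.content

print(answer("What does the warranty cover?"))
```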
- Extending ChatGPT With LlamaIndex (GPT Index)
LlamaIndex, also known as the GPT Index, is a project that provides a central interface to connect your LLMs with external data. Yeah, you read that correctly. With LlamaIndex, we can build something like the setup illustrated in the original post.
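For flavor, a present-day LlamaIndex quickstart looks roughly like the sketch below; the library has been renamed and reorganized since the GPT Index days, so import paths vary by version, and this assumes an OPENAI_API_KEY plus a local data/ folder of documents.

```python
# Minimal LlamaIndex sketch: load local files, index them, and query with an LLM.
# Assumes a recent `llama-index` release and OPENAI_API_KEY set in the environment.
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex

documents = SimpleDirectoryReader("data").load_data()   # hypothetical ./data folder
index = VectorStoreIndex.from_documents(documents)      # embeds and stores the chunks
query_engine = index.as_query_engine()

response = query_engine.query("Summarize the key points of these documents.")
print(response)
```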