How Stanford Teaches AI-Powered Creativity in Just 13 Minutes | Jeremy Utley
Stanford's Jeremy Utley reveals that "most people are not fully utilizing AI's potential." Why is that? He explains that the answer lies in how we approach AI, and that a simple mindset shift could be what you've been missing in the AI revolution.
Creativity is doing more than the first thing you think of
Think of an LLM as a teammate, not just a tool. Give it feedback! Let it ask you questions!
Key Insights:
📌How treating AI as a teammate rather than just a tool can dramatically improve outcomes
📌Why you should have AI ask you questions instead of just answering yours
📌How non-technical professionals can leverage AI to achieve extraordinary results
📌The difference between treating AI as a tool versus as a teammate
If you want to learn more about creativity using AI with Professor Jeremy, please refer to the link below!
👉 https://www.jeremyutley.design/ai-newsletter
222. Automating Processes with Software is HARD
We have decades of experience trying to automate processes. The biggest lesson is that automation is not about the easy and known flow, but about exception handling.
The best illustration of exception handling I can think of is waiting in line at the post office. If you've ever done that, you know the thought "doesn't anyone just want to mail a package?" comes to mind. As it turns out, the entire flow at the post office (or DMV or tax office) is about exception handling. No amount of software is going to get you out of there, because it is piecing together a bunch of inputs and outputs that are outside the bounds of any system.
The ability to automate hinges not just on knowing the steps to take for predefined inputs, nor even the steps to take when some inputs are erroneous or incomplete, but on what to do when you can't even specify the inputs.
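The three tiers described above can be sketched in code: a happy path for well-formed inputs, anticipated rejection of erroneous ones, and escalation to a human when the input falls outside anything the system can specify. A minimal sketch (all names and rules hypothetical):

```python
from dataclasses import dataclass

KNOWN_DESTINATIONS = {"NYC", "SFO", "CHI"}

@dataclass
class Package:
    weight_kg: float
    destination: str

def process(raw: dict) -> str:
    # Tier 1: the happy path -- predefined, well-formed input.
    try:
        pkg = Package(weight_kg=float(raw["weight_kg"]),
                      destination=str(raw["destination"]))
    except (KeyError, TypeError, ValueError):
        # Tier 2: erroneous or incomplete input we anticipated.
        return "rejected: malformed input, ask sender to re-enter"
    if pkg.weight_kg <= 0 or pkg.weight_kg > 30:
        return "rejected: weight out of accepted range"
    if pkg.destination not in KNOWN_DESTINATIONS:
        # Tier 3: input we cannot even specify -- hand it to a human.
        return "escalated: route unknown, send to the counter clerk"
    return f"shipped to {pkg.destination}"
```

The point of the sketch is that tiers 1 and 2 are automatable because they are enumerable; tier 3 is the post-office line.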
My favorite example of the latter is how the arrival of IBM computing in the 60s and 70s totally changed the definition of accounting, inventory control, and business operations. Every process that was "computerized" ultimately looked nothing at all like what was going on under those green eyeshades in accounting. Much of the early internet (and still most bank and insurance systems) looks like HTML front ends to mainframe 3270 screens. Those might eventually change, just not quickly. It may be that the "legacy" or "installed base" of many processes is such that the cost to change is too monumental.
Stop Building AI Tools Backwards | Hazel Weakly
My favorite (evidence backed) theory on how humans learn is Retrieval Practice.
https://www.learningscientists.org/blog/2024/3/7/how-does-retrieval-improve-new-learning
Humans don’t really learn when we download info into our brain, we learn when we expend effort to pull that info out. This has some big implications for designing collaborative tooling!
The “thing” that we learn most effectively is not knowledge as we typically think of it, it’s process. This should be intuitive if we put it into a bit more natural a context. Imagine learning to bake for a moment: Do you teach someone to bake a cake by handing them a fact sheet of ingredients and having them memorize it? Or do you teach them the process?
GameTorch
How did *thinking* reasoning LLMs go from a GitHub experiment to every major company offering super advanced thinking models that can iterate and internally plan code, in only 4 months? It seems a bit fast. Were they already developed by major companies but unreleased? : MLQuestions
It was like a revelation when chain-of-thought AI became viral news as a GitHub project that supposedly competed with SOTA models with only 2 developers and some nifty prompting...
Did all the companies just jump on the bandwagon and weave it into GPT / Gemini / Claude in a hurry?
Did those companies already have e.g. Gemini 2.5 PRO thinking in development 4 months ago and we didn't know?
Why the Coolest Job in Tech Might Actually Be in a Bank
For tech and AI talent, jobs at financial services companies are more desirable than they have ever been. Banks have been working hard to make it happen.
Personal Software: The Unbundling of the Programmer?
Why LLMs will transform development but not how you think
It's about how AI tools are enabling a new category of software that simply couldn't exist before.
When someone can describe their specific needs conversationally and receive working code in response, the economics of personal software development shift dramatically.
Think of it this way: just as spreadsheets enabled non-programmers to perform complex calculations and data analysis, AI-assisted development tools are enabling non-programmers to create personal software solutions.
Which AI to Use Now: An Updated Opinionated Guide
Picking your general-purpose AI
Also:
https://www.oneusefulthing.org/p/doing-stuff-with-ai-opinionated-midyear
Magic Color Picker
The Text2Color API allows you to convert text descriptions of colors in any language into their corresponding color codes. This API uses advanced language processing to interpret color descriptions and return accurate color representations in various formats including HEX, RGB and CMYK.
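To make the idea concrete, here is a toy sketch of what text-to-color conversion looks like conceptually. This is not the actual Text2Color API (its endpoints and response shape are not documented here); the lookup table and function names are hypothetical, and a real service would use language models rather than a dictionary:

```python
# Hypothetical lookup standing in for a language-processing backend.
# Descriptions in different languages can map to the same color.
NAMED_COLORS = {
    "sky blue": "#87CEEB",
    "cielo azul": "#87CEEB",  # same color, described in Spanish
    "crimson": "#DC143C",
}

def describe_color(text: str) -> dict:
    """Map a color description to HEX and RGB representations."""
    hex_code = NAMED_COLORS[text.lower()]
    # Split "#RRGGBB" into its three channel values.
    r, g, b = (int(hex_code[i:i + 2], 16) for i in (1, 3, 5))
    return {"hex": hex_code, "rgb": (r, g, b)}
```

The HEX-to-RGB conversion at the end is the kind of multi-format output the API description mentions (CMYK would be one more arithmetic step).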
GraphRAG: The Most Incredible RAG Strategy Revealed
Today, we dive into the revolutionary GraphRAG from Microsoft, an advanced retrieval-augmented generation system that enhances AI responses by providing relevant context.
📌 In this video, you will learn:
What is RAG (Retrieval-Augmented Generation)?
Differences between Basic RAG and Graph RAG
How to implement Graph RAG in your application
Step-by-step guide on setting up Graph RAG
Advantages of using Graph RAG over traditional methods
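The basic-RAG-versus-GraphRAG difference the video covers can be sketched in a toy form: word overlap stands in for vector search, and a hand-built entity graph stands in for the graph a real GraphRAG pipeline would extract with an LLM (all data and names here are hypothetical):

```python
DOCS = {
    "d1": "Ada Lovelace wrote the first algorithm.",
    "d2": "Charles Babbage designed the Analytical Engine.",
    "d3": "The Analytical Engine was a mechanical computer.",
}

def basic_rag(query: str, k: int = 2) -> list[str]:
    """Basic RAG: rank docs by similarity to the query (word overlap here)."""
    q = set(query.lower().split())
    return sorted(DOCS, key=lambda d: -len(q & set(DOCS[d].lower().split())))[:k]

# Entity graph: which entities relate, and which docs mention each entity.
RELATED = {"Ada Lovelace": {"Analytical Engine"}}
MENTIONS = {"Ada Lovelace": {"d1"}, "Analytical Engine": {"d2", "d3"}}

def graph_rag(entity: str) -> set[str]:
    """Graph RAG: an entity's docs plus the docs of its one-hop neighbors."""
    docs = set(MENTIONS.get(entity, set()))
    for neighbor in RELATED.get(entity, set()):
        docs |= MENTIONS.get(neighbor, set())
    return docs
```

Note that `graph_rag("Ada Lovelace")` reaches d2 and d3 even though those documents share no words with the entity name; that traversal of relationships is the advantage a knowledge graph adds over similarity search alone.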
Losing the imitation game
AI cannot develop software for you, but that's not going to stop people from trying to make it happen anyway. And that is going to turn all of the easy software development problems into hard problems.
- A computer can never be held accountable. Therefore, a computer must never make a management decision.
Programming as Theory Building
Non-trivial software changes over time. The requirements evolve, flaws need to be corrected, the world itself changes and violates assumptions we made in the past, or it just takes longer than one working session to finish. And all the while, that software is running in the real world. All of the design choices taken and not taken throughout development; all of the tradeoffs; all of the assumptions; all of the expected and unexpected situations the software encounters form a hugely complex system that includes both the software itself and the people building it. And that system is continuously changing.
To circle back to AI like ChatGPT, recall what it actually does and doesn't do. It doesn't know things. It doesn't learn, or understand, or reason about things. What it does is probabilistically generate text in response to a prompt.
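That phrase, "probabilistically generate text," can be made concrete with a toy next-token sampler: given a score for each candidate token, pick one in proportion to its softmax probability. This is a minimal sketch of the sampling step only, not of a real model:

```python
import math
import random

def sample_next(logits: dict[str, float], rng: random.Random) -> str:
    """Sample one token in proportion to softmax(logits)."""
    z = max(logits.values())  # subtract the max for numerical stability
    weights = {t: math.exp(s - z) for t, s in logits.items()}
    r = rng.random() * sum(weights.values())
    for token, w in weights.items():
        r -= w
        if r <= 0:
            return token
    return token  # guard against floating-point rounding
```

Run repeatedly, the higher-scoring token appears more often but not always, which is why the same prompt can yield different text on different runs.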
14islands | The art of prompting: An introduction to Midjourney
A great deal of my learnings and inspiration come from the great content by Yubin Ma at AiTuts, where you can learn more about prompting and view a myriad of examples.
Ask HN: Tutorial on LLM / already grasp neural nets | Hacker News
I've watched the 4 videos from 3blue1brown on neural nets. The web and YouTube are awash with mediocre videos on Large Language Models. I'm looking for a good one.
This is part of a longer series but is maybe the single best video I know of on the topic:
https://youtu.be/kCc8FmEb1nY?si=zmBleKwlpV06O3Mw
I thought this video from Stephen Wolfram was also quite good:
https://www.youtube.com/live/flXrLGPY3SU?si=SrP1EJFMPJqVCFPL
What are embeddings?
A deep-dive into machine learning embeddings.
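The core idea of embeddings fits in a few lines: represent items as vectors so that semantic similarity becomes a geometric comparison, typically cosine similarity. A toy sketch with hand-made 3-dimensional vectors (real models produce hundreds or thousands of dimensions learned from data):

```python
import math

# Hand-crafted toy "embeddings"; a real model would learn these.
EMB = {
    "cat": [0.9, 0.1, 0.0],
    "dog": [0.8, 0.2, 0.1],
    "car": [0.0, 0.1, 0.9],
}

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity: dot product divided by the vector magnitudes."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))
```

With these vectors, "cat" is far more similar to "dog" than to "car", which is exactly the property retrieval systems exploit.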
How to Use AI to Do Stuff: An Opinionated Guide
Covering the state of play as of Summer 2023
The Illustrated Stable Diffusion – Jay Alammar – Visualizing machine learning one concept at a time.
This is a gentle introduction to how Stable Diffusion works.
How to use AI to do practical stuff: A new guide
People often ask me how to use AI. Here's an overview with lots of links.
- The Six Large Language Models
- Write stuff
- Make Images
- Come up with ideas
- Make videos
- Coding
GPT-4: We Are in a Major Technological Change – Don Norman's JND.org
Yes, there has been much hype over the imagined powers and flaws of the new Large Language Models (e.g., Chat GPT-4), but the recent advances (that is, as of today in April 2023) indicate that ther…
https://arxiv.org/pdf/2303.12712.pdf
A talk by the lead author, Sebastian Bubeck, at MIT on March 22, 2023: Sparks of AGI: Early experiments with GPT-4.
https://www.youtube.com/watch?v=qbIk7-JPB2c
What Is ChatGPT Doing … and Why Does It Work?—Stephen Wolfram Writings
Stephen Wolfram explores the broader picture of what's going on inside ChatGPT and why it produces meaningful text. Discusses models, training neural nets, embeddings, tokens, transformers, language syntax.
Camera obscura: the case against AI in classrooms: Matthew Butterick
Research means more than fact-checking
When I first used GitHub Copilot, I said it “essentially tasks you with correcting a 12-year-old’s homework … I have no idea how this is preferable to just doing the homework yourself.” What I meant is that often, the focus of programming is not merely producing code that solves a problem. Rather, the code tends to be the side effect of a deeper process, which is to learn and understand enough about the problem to write the code. The authors of the famous MIT programming textbook Structure and Interpretation of Computer Programs call this virtuous cycle procedural epistemology. We could also call it by its less exotic name: research.