AlexNet
Krizhevsky, Sutskever, and Hinton's 2012 ImageNet victory proved deep learning could outperform hand-engineered computer vision. Paper #7.
Notes on building Chat Labs 1AI. Getting 250+ AI models to work together. What breaks, what works, what I learned.
Vinyals, Fortunato, and Jaitly repurposed attention to point at input positions instead of blending hidden states. Paper #6.
Hinton and van Camp showed that penalizing weight complexity leads to better generalization. Paper #5.
Zaremba, Sutskever, and Vinyals figured out how to apply dropout to LSTMs without breaking them. Paper #4.
Christopher Olah's 2015 post explained LSTM gates with clarity that textbooks lacked. Paper #3.
Karpathy's famous 2015 post showed RNNs could generate Shakespeare and Linux code by predicting one character at a time. Paper #2.
Scott Aaronson's First Law of Complexodynamics explains why complexity rises, peaks, then falls. Paper #1 from Sutskever's 30.
Got tired of copy-pasting AI responses into Slack. Built shareable links instead. One good prompt becomes team knowledge.
Scraped 10,000 custom instructions to see patterns. Set your style once — every model remembers.
I write when I ship something or figure something out. Maybe twice a month. No tracking, no growth hacks.
One-click unsubscribe.