Disclosure Triangle of Sadness
13 January 2023
One of the advantages of not being a full-time iOS app developer is that I can spend an unreasonable amount of time on minuscule details of app design. Paying attention to the details is important and a sign that you care — but it’s rarely noticed or rewarded by others, while shipping something that works is. In any case, sweating the details is a luxury I can afford now, which brings me to a UI design curiosity of Apple’s that sent me down a rabbit hole. Consider the empty tab page on Safari:
I’ve circled the disclosure control next to the ‘Shared With You’ section that lets you expand and show more items. Now consider the same New Tab page on iOS Safari:
On the Mac, the control for expanding ‘Shared With You’ is a downwards-pointing chevron; it becomes upwards-pointing when you expand it. On iOS, it is downwards-pointing after you expand it, while it points to the side when the section is collapsed.
The first thought that came to mind when I noticed the different direction of the disclosure links was Gruber’s post on what disclosure controls ought to do:
In the iOS/Mac style, a right-pointing chevron (or triangle, depending on the OS) indicates the collapsed state, and a down-pointing chevron indicates the expanded state.
Based on that article, I thought I had found another instance of Apple’s own apps not following the HIG, but that isn’t the case. Based on my reading of Apple’s HIG, this control is a disclosure button on the Mac, but a disclosure link on iOS. Now I have two questions:
- Why do disclosure links show state and imply action while disclosure buttons show action and imply state?
- Why is it a button on the Mac and a link on iOS?
I feel like there ought to be a clear answer to the first one, but the examples in the HIG don’t elucidate the differences to me. Regarding the second question, I think it’s a mistake by whoever designed it. Disclosure links are all over macOS and they do exactly what this disclosure button in Safari does: expand a list of items to show more. Why should this control be different?
Bye Hitch
17 December 2022
In the ordinary moral universe, the good will do the best they can, the worst will do the worst they can, but if you want to make good people do wicked things, you’ll need religion.
I used to believe this was true, as I did with many things that Hitch wrote. The statement is obviously false, as well as stubbornly simplistic. I’m not too concerned with the semantics of the quote, or the soundness of its argument, but I do think about it a lot, especially at this moment in my life.
I have no answers. I only have compassion for those who fear the intolerance of society, and loathing for the intolerant.
Just Causation
6 December 2022
Me: Does causation imply correlation?
ChatGPT: Yes, causation implies correlation. If one event causes another, then there must be a relationship or correlation between the two events. Causation is the relationship between two events in which one event causes the other to occur. This means that if event A causes event B, there must be a correlation between the two events, because the presence of A is necessary for B to occur.
‘Correlation doesn’t imply causation’ is a dictum that’s been repeated to death in statistics classes everywhere. But does that mean causation necessarily implies correlation? Initially, I too would have given a similar answer to ChatGPT above, but as I’ve discovered, the answer depends on what you mean by ‘correlation’. To be precise, and it’s always good to be precise with math and statistics, causation does not necessarily imply correlation.
Correlation generally refers to linear correlation, as in Pearson’s correlation, which only measures linear relationships between variables. However, it’s perfectly possible for one variable to depend on another in a purely non-linear way while the linear correlation between them is zero. While we do have non-linear correlation measures, mutual information is the measure that we want in our new dictum: causation implies high mutual information.
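To make that concrete, here is a small sketch of my own (not from any particular textbook, standard library only, and with a deliberately crude histogram-based MI estimator): Y = X² is completely determined by X, so there is full dependence, yet the Pearson correlation is essentially zero while the mutual information is clearly positive.

```python
import math
import random

random.seed(0)

# Y is a deterministic, non-linear function of X: total dependence,
# but no *linear* relationship between the two.
xs = [random.uniform(-1, 1) for _ in range(100_000)]
ys = [x * x for x in xs]

def pearson(a, b):
    """Pearson's linear correlation coefficient."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b)) / n
    sa = math.sqrt(sum((x - ma) ** 2 for x in a) / n)
    sb = math.sqrt(sum((y - mb) ** 2 for y in b) / n)
    return cov / (sa * sb)

def binned_mi(a, b, bins=20):
    """Crude mutual-information estimate (in nats) from a 2-D histogram."""
    lo_a, hi_a, lo_b, hi_b = min(a), max(a), min(b), max(b)
    def bucket(v, lo, hi):
        return min(int((v - lo) / (hi - lo) * bins), bins - 1)
    n = len(a)
    joint, pa, pb = {}, {}, {}
    for x, y in zip(a, b):
        i, j = bucket(x, lo_a, hi_a), bucket(y, lo_b, hi_b)
        joint[(i, j)] = joint.get((i, j), 0) + 1
        pa[i] = pa.get(i, 0) + 1
        pb[j] = pb.get(j, 0) + 1
    # MI = sum over cells of p(x,y) * log( p(x,y) / (p(x) * p(y)) )
    return sum(
        (c / n) * math.log((c / n) / ((pa[i] / n) * (pb[j] / n)))
        for (i, j), c in joint.items()
    )

print(f"Pearson r: {pearson(xs, ys):+.3f}")       # essentially zero
print(f"Binned MI: {binned_mi(xs, ys):.2f} nats") # clearly positive
```

With SciPy or scikit-learn available, `scipy.stats.pearsonr` and `sklearn.metrics.mutual_info_score` would paint the same picture with better-tested estimators; the point is only that the linear measure misses a dependency the information-theoretic one catches.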
I’m comforted that I’m not the only one who made the error of not being clear and precise with my words (and thus my understanding) in statistics. Nobel prize winners do it too.
An aside about ChatGPT
The first time I tried the causation prompt with ChatGPT, it didn’t understand the question, and proceeded to give a rote explanation of why correlation doesn’t imply causation. The fact that these models are easily tripped up by this sort of diversionary tactic (as Venelin explored in the adversarial QA paper) is what most dissuades me from the idea that there is latent language understanding and meaning that can arise from the simple task of language modeling. However, when I tried it again before writing this post, ChatGPT got it right! I managed to come up with another one that tricked it along the same lines though:
Me: What came second, the chicken or the egg
ChatGPT: The question of which came first, the chicken or the egg, is a classic conundrum. It is often used as a metaphor for a problem that has no clear solution, or for a situation in which it is difficult to determine the cause and effect. In this case, there is no definitive answer, as the question itself is based on a false premise. Chickens hatch from eggs, so a chicken must have come from an egg. However, the egg that the chicken came from must have been laid by a chicken, so the chicken must have come first. This cycle continues indefinitely, making it impossible to determine which came first.
Build to Learn
29 November 2022
Andrej Karpathy on the Lex Fridman podcast, when asked how beginners should approach learning machine learning:
I am a believer at a high level of the 10,000 hours concept…You literally have to put in 10,000 hours of work…
Now, while Malcolm Gladwell does peddle a lot of crap (see Gelman), the 10,000 hours idea is reasonable. Keep working on something for a few hours every day for a few years, and you’ll probably be really, really good at it at the end. I think it’s good advice, to which I’d add a small aphorism. The best way to learn a technical concept in software, machine learning, and related fields is to build something with it. I learnt SwiftUI and Combine by building DeTeXt. I learnt JavaScript by building websites and web apps. Most of my understanding of transformers and modern deep neural nets comes from trying to use them in my research projects. So yes, spend a lot of time consistently on a topic, and try building something while you’re learning it.
I want to be a good writer, and the only way forward towards that goal is to write more. But what am I building? This blog? This doesn’t feel like a cohesive unit that I’m building, not that it has to be one. Perhaps I’ll keep chipping away at the 10,000 hours by writing regularly here, until I find something I want to build.
Blogroll
12 November 2022
Here is a list of all the blogs and newsletters I subscribe to through NetNewsWire:
- aroonwithaview by Aroon Narayanan
- Astral Codex Ten by Scott Alexander
- Bartosz Ciechanowski
- Confirm My Choices by Matthew Hobbes
- Kareem Abdul-Jabbar
- Resident Contrarian
- Tom Scott
Academia
- Andrej Karpathy
- Andrew Gelman
- Austin Z. Henley
- Emily M. Bender
- Gary Marcus
- Lil’Log by Lilian Weng
- Rachel Tatman
- Shtetl-Optimized by Scott Aaronson
- [dippedrusk] by Vagrant Gautam
Feminism
- All in Her Head by Jessica Valenti
- Jill Filipovic
- More to Hate by Kate Manne
- Not the Fun Kind by Moira Donegan
- The Audacity. by Roxane Gay
- The Present Age by Parker Molloy
India
Tech
- And now it’s all this by Dr. Drang
- Birchtree by Matt Birchler
- Daring Fireball by John Gruber
- Hypercritical by John Siracusa
- Idle Words by Maciej Ceglowski
- LMNT by Louie Mantia
- Marco Arment
- Maurice Parker
- Monday Note by Jean-Louis Gassée
- Pixel Envy by Nick Heer
- Riccardo Mori
- Six Colors by Jason Snell
- The Desolation of Blog by Jeff Johnson
- The Shape of Everything by Gus Mueller