DeTeXt, a year later

6 November 2021

A year and two months ago, I released DeTeXt – my first iOS app, which I built using SwiftUI, Combine, CoreML and PencilKit. Thinking back, I believe I went from no code to an app in the App Store in six weeks? I can hardly believe it now, but I am proud of building DeTeXt – I’m glad it helped some people find the LaTeX symbol they wanted and saved perhaps a few seconds of their day. Building and releasing DeTeXt reaffirmed one thing about myself – I love building tools and working on projects that help other people be successful at their work, even in very small ways.

Initially, I had trained the MobileNetV2 neural network for DeTeXt’s symbol-recognition engine on a 4-GPU cluster. Today, I did the same on my M1 Pro MacBook Pro running on battery, thanks to tensorflow-metal and the 16-core GPU. I wish I had documented the cluster training better, but I do have the training time on the M1 Pro: 6 minutes per epoch, for an image-classification network trained on over 200,000 images. After one epoch, the network reached an accuracy of 65%. Six minutes per epoch to train a neural network with over 7 million parameters, on battery power. Wow.
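For the curious, the training setup looks roughly like the sketch below. This is not the actual DeTeXt training code – the class count, input size and synthetic data are stand-ins for the real symbol dataset – but with tensorflow-metal installed (`pip install tensorflow-metal`), the same `fit()` call runs on the M1’s GPU with no code changes:

```python
import numpy as np
import tensorflow as tf

# Hypothetical stand-in for the symbol dataset: random 96x96 RGB
# "drawings" with 10 classes (the real dataset is far larger).
num_classes = 10
x = np.random.rand(32, 96, 96, 3).astype("float32")
y = np.random.randint(0, num_classes, size=(32,))

# MobileNetV2 trained from scratch (weights=None), with a small
# classification head on top of global average pooling.
base = tf.keras.applications.MobileNetV2(
    input_shape=(96, 96, 3), weights=None, include_top=False, pooling="avg"
)
model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(num_classes, activation="softmax"),
])
model.compile(
    optimizer="adam",
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)

# tensorflow-metal registers the GPU as a pluggable device, so the
# training loop itself is unchanged from the CPU/CUDA version.
history = model.fit(x, y, batch_size=16, epochs=1, verbose=0)
```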


Thinking about attention again

20 October 2021

I’ve read, and re-read, CGP Grey’s blog post (a.k.a. Project Cyclops) about wresting back his focus and attention from online systems that feed on it. I’ve tried it a few times before, but this is the first time I’ve written about it publicly. So, effective immediately:

These activities are fine (but with some conditions):

Why am I doing this? For the same reason CGP Grey spelt out in his blog — I want to spend more time doing things I care about: research & study, reading, writing, cooking, building easy-to-use and delightful software tools and apps, and watching films.

Update

This is much harder than I thought it would be. I find myself replacing the time I would sink into the outlawed activities with others that seem ‘alright’ — Wikipedia, Letterboxd… There isn’t any magical solution, other than to be mindful of what I’m doing.


Two out of Five

17 May 2021

Phil Price over at Andrew Gelman’s blog had an interesting post about a certain unnamed principle that caught my attention:

Many years ago I saw an ad for a running shoe (maybe it was Reebok?) that said something like “At the New York Marathon, three of the five fastest runners were wearing our shoes.” I’m sure I’m not the first or last person to have realized that there’s more information there than it seems at first. For one thing, you can be sure that one of those three runners finished fifth: otherwise the ad would have said “three of the four fastest.” Also, it seems almost certain that the two fastest runners were not wearing the shoes, and indeed it probably wasn’t 1-3 or 2-3 either: “The two fastest” and “two of the three fastest” both seem better than “three of the top five.” The principle here is that if you’re trying to make the result sound as impressive as possible, an unintended consequence is that you’re revealing the upper limit.

My first thought was naturally that this sentence gives rise to a scalar implicature. The unintended consequence is a scalar implicature that arises from a violation of the maxim of quantity. Reebok doesn’t want to say “The third, fourth and fifth fastest runners of the marathon were wearing our shoes” (the worst case), so they choose to say this instead. We, as Gricean listeners, are licensed to derive the hidden inference from the form of the utterance.
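The reasoning behind Price’s inference can be made concrete with a small enumeration. The slogan-choice rule below is my own simplification (a truthful advertiser reports “three of the *n* fastest” with the smallest true *n*, and prefers “the two fastest” when it applies), not Price’s or anyone’s formal model:

```python
from itertools import combinations

# Positions 1-5 are the five fastest finishers. Assume exactly three
# of them wore the brand's shoes; the advertiser picks the smallest n
# for which "three of the n fastest" is true, i.e. n = worst position.
def slogan(positions):
    n = max(positions)
    return "the three fastest" if n == 3 else f"three of the {n} fastest"

# Which sets of positions are consistent with hearing "three of the
# five fastest"? Only those whose worst position is exactly 5.
consistent = [s for s in combinations(range(1, 6), 3)
              if slogan(s) == "three of the 5 fastest"]
print(all(5 in s for s in consistent))  # → True: one runner was fifth

# Adding Price's stronger alternative -- say "the two fastest" whenever
# positions 1 and 2 are both covered -- prunes the sets further:
def best_slogan(positions):
    if {1, 2} <= set(positions):
        return "the two fastest"
    return slogan(positions)

pruned = [s for s in consistent
          if best_slogan(s) == "three of the 5 fastest"]
print(pruned)  # sets containing 5 but not both 1 and 2
```

Every surviving set contains position 5, and none contains both 1 and 2 — exactly the two inferences Price draws from the ad.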

But does Reebok want us to make the inference that one of those three runners finished fifth? Of course not. The construction “three of the five fastest runners” is interesting for this reason – it doesn’t hide the inference, but it weakens it. Consider the alternatives:

  1. Some of the five fastest runners were wearing our shoes.
  2. Three of the fastest runners were wearing our shoes.
  3. Three of the five fastest runners were wearing our shoes.

(3) seems ideal – it’s technically the truth, and it sounds more impressive than (1) and (2). But I can’t shake the feeling that I’m missing something obvious in my thinking. For one, the partitive construction “three of the five” should lead to a probabilistically stronger inference, as Judith Degen has shown in her work. Also, (3) sounds more impressive than (2), right? But why is the inference in (3) non-obvious on a first reading of the marketing slogan? Is there an effect due to the numeral determiners three and five? Assuming the truth of the statement, is “Forty of the fifty fastest runners were wearing our shoes” as impressive as (3)?

I’ll need to see what work has been done on these constructions, but sentences of this form seem to illustrate Degen’s non-homogeneity assertion: not all scalar implicatures have the same strength.

Update

Andrew Gelman makes an interesting point in reply to my question of why Reebok would use this phrasing when it could lead to a weaker implicature: people know that something is up but we’re still taken in, similar to how we’re more likely to buy something if it’s ₹99 rather than ₹100.


Anti-authoritarian

20 April 2021

Vaibhav Vats in Caravan magazine, on the pathetic orchestrated tweets by Indian celebrities following Rihanna’s tweet drawing attention to the farmers’ protests:

Tendulkar became the epitome of values prized in the conventional, hierarchal and self-congratulatory milieu of the middle class, showing no eagerness to challenge the many prejudices of society and state. His notion of ethics remained limited purely to the realm of his own personal conduct … this personal decency has always been accompanied by a deeply ingrained timidity towards authority, a primal fear of upsetting any establishment, whether cricketing or otherwise.

This comment on the middle class resonated with me: respect authority, don’t step out of line, don’t cause trouble, etc. Why risk being anti-authoritarian when you can maintain your place in the social hierarchy by being obedient to authority?


Money Laundering for Bias

19 December 2020

It’s terrifying to choose an area of study for your PhD research – what if it turns out to be a dead end in a year? That thought is inescapable as I start (in earnest) studying the Linguistic Intergroup Bias.

At this critical juncture in my research (and life), I’m glad I found this talk by Maciej Ceglowski. It might seem overly simplistic, but I appreciate its sentiment, and find it useful to keep in mind when studying bias from a computational perspective:

Instead of relying on algorithms, which we can be accused of manipulating for our benefit, we have turned to machine learning, an ingenious way of disclaiming responsibility for anything. Machine learning is like money laundering for bias. It’s a clean, mathematical apparatus that gives the status quo the aura of logical inevitability. The numbers don’t lie.