Followups: The man in the machine and pure AI content production

Anthony Bardaro
Annotote TLDR
7 min read · Oct 5, 2022


The following highlights provided by Annotote: Don’t waste time and attention, get straight to the point. All signal. No noise.

The Roaring 20s: A future-proof economy and the business of adapting to automation/AI/ML

by Anthony Bardaro (Adventures in Consumer Technology) 2020.01.06

BingGPT told Google to shut up and dance with me: Traditional search, chat-based search, and the actual applications of LLMs

by Anthony Bardaro (Adventures in Consumer Technology) 2023.03.02

Meta (Facebook) open-sources its free AITemplate (AIT) engine, which makes AI code run 4x–12x faster across GPU chips

by Ben Thompson (Stratechery) 2022.10.05

“a new set of free software tools for artificial intelligence applications that could make it easier for developers to switch back and forth between different underlying chips. Meta’s new open-source AI platform is based on an open-source machine learning framework called PyTorch, and can help code run up to 12 times faster [than classic PyTorch] on Nvidia Corp’s (NVDA) flagship A100 chip or up to four times faster on Advanced Micro Devices Inc’s (AMD) MI250 chip[.]”

Meta’s motivation for creating this tool is straightforward: the company is one of the largest users of AI and thus, by extension, biggest consumers of Nvidia GPUs in the world; this tool will enable the company to more easily set Nvidia and AMD in competition against each other in terms of providing chips for inference, because the company won’t need to rewrite its software based on whoever offers the best performance per dollar (or per watt). Moreover, it seems certain that the company’s in-development AI chip will be optimized for AITemplate.

Still, at the end of the day, Meta is only one Nvidia customer, and Nvidia could alter CUDA and its chips to make AITemplate less effective. That is where open-sourcing AITemplate makes sense: to the extent AITemplate becomes the standard library for inference — and why wouldn’t companies want to adopt it, given they have the same desire to escape from Nvidia’s pricing power and lock-in? […]

There are some downsides for Meta, including making it easier for companies to compete with Meta’s AI; perhaps Meta has had the same sort of realization… that AI is actually going to be much more decentralized than previously thought, and it was better to leverage that decentralization to, in the long run, drive its own costs lower.

As for Nvidia, time will tell how much traction this gets, but it strikes me as a big blow that not only does this tool exist but that it could be so much more performant than Nvidia’s own implementation. That makes it that much more likely AITemplate gets traction: not only are there long-term reasons to favor it, but short-term ones as well.
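For a sense of what that hardware-agnostic layer looks like to a developer, here is a minimal sketch of compiling a toy model with AITemplate, modeled on the project's public examples; exact module paths and signatures may differ across versions.

```python
from aitemplate.compiler import compile_model
from aitemplate.frontend import nn, Tensor
from aitemplate.testing import detect_target

# Define a toy model against AITemplate's frontend (mirrors PyTorch's nn API).
class ToyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.dense = nn.Linear(64, 32)

    def forward(self, x):
        return self.dense(x)

# Symbolic input; AITemplate primarily targets fp16 inference.
x = Tensor(shape=[1, 64], dtype="float16", name="x", is_input=True)
y = ToyModel()(x)
y._attrs["name"] = "y"
y._attrs["is_output"] = True

# detect_target() picks CUDA (Nvidia) or ROCm (AMD) automatically, so the
# same model definition compiles for either vendor's GPUs.
module = compile_model(y, detect_target(), "./tmp", "toy_model")
```

The point of the design is that the vendor choice collapses into that one `detect_target()` call: the model code above never mentions Nvidia or AMD.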

A podcast episode, generated entirely by artificial intelligence, in which Joe Rogan interviews Steve Jobs

via Twitter (Alex MacCaw @maccaw) 2022.10.11

As OpenAI’s cloud and commercialization partner, Microsoft integrated DALL-E 2 into its Designer app, Bing search engine, Edge browser, and GitHub Copilot

by Liberty RPF 2022.10.19

the integration of generative AI via APIs into a few products:

“Microsoft is making a major investment in DALL-E 2, OpenAI’s AI-powered system that generates images from text, by bringing it to first-party apps and services. [including] newly announced Microsoft Designer app and Image Creator tool in Bing and Microsoft Edge…

“Microsoft is the exclusive provider of cloud computing services to OpenAI and is OpenAI’s preferred partner for commercializing new AI technologies [like] GitHub Copilot[.]”

…this kind of integration is truly bringing this tech to the masses[.]
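For a sense of how thin this kind of API integration is on the developer side, here is roughly what generating an image from a text prompt looked like with the 0.x-era `openai` Python client (the client and endpoint names have since been revised, so treat this as a period sketch):

```python
import openai  # pip install openai (the 0.x-era client)

openai.api_key = "sk-..."  # your OpenAI API key

# Ask the DALL-E image endpoint for one image from a plain-text prompt.
response = openai.Image.create(
    prompt="an astronaut riding a horse, photorealistic",
    n=1,
    size="1024x1024",
)

print(response["data"][0]["url"])  # URL of the generated image
```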

[Github] Copilot acts as a kind of advanced auto-complete for computer code…

“Copilot is now handling up to 40% of coding among programmers using the AI in the beta testing period over the past year… for every 100 lines of code, 40 are being written by the AI, with total project time cut by up to 55%”…

“The GitHub CEO expects more of those Copilot code suggestions to be taken — in the next five years, up to 80%.”
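To picture what "advanced auto-complete" means in practice: the developer writes a name and a docstring, and the assistant proposes the body. The suggestion below is illustrative, not an actual Copilot transcript:

```python
# The developer types the signature and docstring...
def is_palindrome(text: str) -> bool:
    """Return True if `text` reads the same forwards and backwards,
    ignoring case and non-alphanumeric characters."""
    # ...and the assistant suggests an implementation like this:
    cleaned = "".join(ch.lower() for ch in text if ch.isalnum())
    return cleaned == cleaned[::-1]


assert is_palindrome("A man, a plan, a canal: Panama")
```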

AI-generated art sparks backlash from Japan’s anime community

by NiemanLab 2022.11.01

renowned South Korean illustrator Kim Jung Gi passed away unexpectedly at the age of 47. He was beloved for his innovative ink-and-brushwork style of manhwa, or Korean comic-book art, and famous for captivating audiences by live-drawing huge, intricate scenes from memory. Just days afterward, a former French game developer, known online as 5you, fed Jung Gi’s work into an AI model. He shared the model on Twitter as an homage to the artist, allowing any user to create Jung Gi-style art with a simple text prompt… The response was pure disdain…

While there’s a long-established culture of creating fan art from copyrighted manga and anime, many are drawing a line in the sand when AI creates similar artwork… rooted in the intense loyalty of anime and manga circles — and, in Japan, the lenient laws on copyright and data-scraping…

Microsoft plans to extend GitHub Copilot’s generative AI to other job categories, like security and video game design

by Bloomberg 2022.11.01

Amazon announces Create with Alexa, a generative AI that lets children create animated stories via voice prompts on three topics, available on Echo Show devices

by Engadget 2022.11.29

chatGPT, the Google Search aggregator, and the vertical search/chat platform opportunity

by Anthony Bardaro (@anthpb via Twitter) 2022.12.09

Y’all way too confident that chatGPT is going to eat Google’s lunch. Google Assistant is a thing, used by 500M DAUs on 1B devices, with pretty accurate/authoritative answers (not 10 blue links/ads) that carry on convos. The disruption of Google Search v2022 is almost a guarantee, but y’all just tripping over each other to be the first to call chatGPT that disruptor — as if Google has no AI assets, only immovable sacred cows. Maybe start with a probability and a confidence level that aren’t each 100%…

Also improbable, but [I’m] more confident [that] chatGPT, et al can disrupt closed search like elasticsearch $estc and $yext Answers: a platform API connects an AI trained on the open web to private/internal systems, so orgs can train the model on their own, proprietary, supplemental data…
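One plausible shape for that kind of platform is retrieval-augmented generation: embed the org’s private documents, retrieve the closest matches to a query, and hand them to a web-trained model as context. A minimal sketch, where `embed` and `complete` are hypothetical stand-ins for whatever provider APIs an org plugs in:

```python
import math
from dataclasses import dataclass
from typing import Callable

@dataclass
class Doc:
    text: str
    vector: list[float]  # embedding of the proprietary document

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

def answer(query: str, index: list[Doc],
           embed: Callable[[str], list[float]],
           complete: Callable[[str], str]) -> str:
    """Retrieve the org's most relevant private docs, then let a
    general-purpose model answer grounded in them."""
    qv = embed(query)
    top = sorted(index, key=lambda d: cosine(qv, d.vector), reverse=True)[:3]
    context = "\n".join(d.text for d in top)
    prompt = f"Answer using only this internal context:\n{context}\n\nQ: {query}"
    return complete(prompt)
```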

Beyond search, there’s unmet need for an independent platform to support 3rd-party voice assistants, chatbots, etc. — Amazon Alexa SDK, Google Assistant SDK, Nuance [Microsoft], etc. are barely useful and hardly Switzerland.

Google and Facebook’s responses to OpenAI’s product launches

by Anthony Bardaro (@anthpb via Twitter) 2022.12.30

Google’s response to ChatGPT is PaLM (Pathways Language Model), which has over 3x the parameters (roughly 540 billion vs GPT-3’s 175 billion)…

Facebook’s Deep Learning Recommendation Model (DLRM)… uses 12 trillion parameters vs GPT-3 at 175 billion, resulting in a 40x increase in speed.

The third magic: A meditation on history, science, and artificial intelligence

by Noah Smith (Noahpinion) 2023.01.01

In 2001, the statistician Leo Breiman wrote an essay called “Statistical Modeling: The Two Cultures”, in which he described an emerging split between statisticians who were interested in making “parsimonious models” of the phenomena they studied, and others who were more interested in “predictive accuracy”. He demonstrated that in a number of domains, what he calls “algorithmic” models (early machine learning techniques) were yielding consistently better predictions than what he calls “data models”, even though the former were far harder, or even impossible, to interpret.
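Breiman’s split is easy to reproduce (my illustration, not the essay’s): fit a parsimonious “data model” (linear regression) and an “algorithmic” model (a random forest, Breiman’s own invention) to the same nonlinear data; the forest predicts far better while offering no readable equation.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(2000, 1))
y = np.sin(2 * X[:, 0]) + 0.1 * rng.normal(size=2000)  # nonlinear ground truth

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

linear = LinearRegression().fit(X_tr, y_tr)                     # parsimonious, interpretable
forest = RandomForestRegressor(random_state=0).fit(X_tr, y_tr)  # black box

print("linear R^2:", linear.score(X_te, y_te))  # poor: the truth isn't linear
print("forest R^2:", forest.score(X_te, y_te))  # near 1.0, but no simple formula
```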

This raises an important question: What is the goal of human knowledge? As I see it — and as Breiman sees it — the fundamental objective is not understanding but control. By recording which crops grow in which season, we can feed our families… In these situations, knowledge and understanding might be intrinsically satisfying to our curiosity, but that satisfaction ultimately pales in importance to our ability to reshape our world to our benefit. And the “algorithmic” learning models that Breiman talks about were better able to deliver their users the power to reshape the world, even if they offered less promise of understanding what they were predicting…

In 2009 — just before the deep learning revolution really kicked off — the Google researchers Alon Halevy, Peter Norvig, and Fernando Pereira wrote an essay called “The Unreasonable Effectiveness of Data” that picked up the argument where Breiman left off. They argued that in the cases of natural language processing and machine translation, applying large amounts of data was effective even in the absence of simple generalizable laws…

Anyway, the basic idea here is that many complex phenomena like language have underlying regularities that are difficult to summarize but which are still possible to generalize. If you have enough data, you can create a model (or, if you prefer, an “AI”)…

The ability to write down farming techniques is power. The ability to calculate the path of artillery shells is power… even if we don’t really understand the principles of how it’s doing what it does. This power is hardly limited to natural language processing and chatbots. In recent years, DeepMind’s AlphaFold algorithm has outpaced traditional scientific methods in predicting the shapes of folded proteins…

[I]nstead of spending our effort on a neverending (and probably fruitless) quest to make AI fully interpretable, I think we should recognize that science is only one possible tool for predicting and controlling the world. Compared to science, black-box prediction has both strengths and weaknesses.

One weakness — the downside of being “unscientific” — is that without simple laws, it’s harder to anticipate when the power of AI will fail us. Our lack of knowledge about AI’s internal workings means that we’re always in danger of overfitting and edge cases. In other words, the “third magic” may be more like actual magic than the previous two [i.e. recorded history/recordkeeping and science] — AI may always be powerful yet ineffable, performing frequent wonders, but prone to failure at fundamentally unpredictable times.
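A toy version of that failure mode (mine, not Smith’s): a model can look flawless on data like what it was trained on, yet break on inputs just outside that range, because tree ensembles cannot extrapolate.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
X_train = rng.uniform(0, 1, size=(500, 1))  # all training inputs live in [0, 1]
y_train = X_train[:, 0] ** 2                # true relationship: y = x^2

model = RandomForestRegressor(random_state=0).fit(X_train, y_train)

print(model.predict([[0.5]]))  # ~0.25: in-distribution, looks flawless
print(model.predict([[3.0]]))  # ~1.0, not 9.0: the edge case fails silently
```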

Editors at some literary magazines say they are getting overwhelmed by AI-generated submissions, potentially crowding out genuine submissions from newer writers

by The Verge 2023.02.27

The first knowledge network

Annotote gives you highlights of everything you need to read and lets you annotate anything you want to save or share. Check out the most frictionless way to get informed or inform others:

Don’t waste time and attention: Annotote. All signal. No noise.
