Ask HN: Are people in tech inside an AI echo chamber?
398 points by freelanddev 10 months ago | 731 comments
I recently spoke with a friend who is not in the tech space and he hadn’t even heard of ChatGPT. He’s a millennial & a white collar worker and smart. I have had conversations with non-tech people about ChatGPT/AI, but not very frequently, which led me to think, are we just in an echo chamber? Not that this would be a bad thing, as we’re all quite aware that AI will play an increasing role in our lives (in & out of the office), but maybe AI mainstream adoption will take longer than we anticipate. What do you think?



Definitely. The tech is impressive, but everyone I've spoken to thinks of it as Cleverbot 2.0, and among the more technically minded I've found that most people are indifferent. Hell, IRL most people I know don't think much of it, though on HN and elsewhere online I see a lot of people praising it as the next coming of Christ (this thread included), which puts it in a similar tier as crypto and other Web3 hype trains as far as I'm concerned.

Every "AI" related business idea I've seen prop up recently is people just hooking up a textbox to ChatGPT's API and pretending they're doing something novel or impressive, presumably to cash in on VC money ASAP. The Notion AI is an absolute fucking joke of epic proportions in its uselessness yet they keep pushing it in every newsletter

And a funny personal anecdote: a colleague of mine tried to use GPT-4 when answering a customer question (they work support). The customer instantly knew it was AI-generated and was quite pissed about it, so the support team now has an unofficial rule not to do that any more.


>puts it in a similar tier as crypto

Comparisons between AI and crypto are horribly misguided IMO.

Is AI overhyped? Sure. However -

AI/ML is creating utility everywhere in our lives - speech to text, language translation, recommendation engines, relevancy ranking in search, computer vision, etc. and seems to be getting embedded in more and more processes by the day.

Crypto never amounted to anything beyond a currency for black market transactions, a vehicle for speculation, and a platform for creating financial scams.


Everywhere??

That’s exactly the hype talk that’s going to burst this bubble.

Here’s tech’s dirty little secret. Despite all the screams about automation and universal basic income, the place where job replacement would show up is the labor productivity numbers. If GDP stays flat or grows while the number of jobs is reduced… bingo… you’d see that number climb.

Productivity has actually stayed flat or gone down over the last 15 years. Despite the fact that we now have trillion-dollar corporate behemoths. Despite the fact that we’re enabling a surveillance state Orwell couldn’t have imagined. Despite the polarization we see. And teen anxiety going through the roof, along with teen/pre-teen suicides.

When you said AI (and in my view tech in general) are everywhere, I’m guessing this wasn’t what you meant…


The missing productivity paradox is something that should interest everyone in tech! Like any macroeconomic observation, there is plenty of uncertainty in both the measurements (GDP ain't perfect) and the context (2 financial crises and a pandemic). But even pro-status-quo institutions like Brookings agree that Something Is Up. Their 2021 review on the topic is both a decent summary and a good source of further refs [1].

My favorite explanation is that many new technologies end up redistributing wealth rather than creating it, which certainly tracks with both subjective and quantified growth in inequality over the same time period. However, a slightly more optimistic take is that tech is aligning production better with people's preferences, so that the same productivity enables people to live more distinct lifestyles that suit them.

[1] https://www.brookings.edu/articles/how-to-solve-the-puzzle-o...


Another explanation that I find persuasive, put forward by Ezra Klein, is that productivity-sapping uses of technology grew along with the productivity-boosting ones: social media, for example, is a powerful mechanism for distracting people and destroying their attention spans.

If this is a good explanation, it raises the question of what AI might do to destroy productivity as well. If you’re constantly sexting with your AI girlfriend, who just happens to be extraordinarily adept at tapping into your sexual proclivities, maybe you won’t get as many support tickets resolved as your boss was hoping.

More hypothetically, I would also expect that a world in which people spend a lot of time with screens strapped to their head, consuming an infinite stream of entertainment provided by generative AI, is not going to produce higher GDP.


> More hypothetically, I would also expect that a world in which people spend a lot of time with screens strapped to their head, consuming an infinite stream of entertainment provided by generative AI, is not going to produce higher GDP.

Yeah, I think this dovetails with the idea that IT may be satisfying preference allocations without increasing overall production. Watching 10 movies a month on a streaming service adds much less to the GDP than going to the cinema 10x, but if the selection is better it might satisfy you more. Economists sometimes attempt to measure this with "utility adjustments" which recognize increasing quality in the same goods, but it's very hard for those adjustments to account for the hidden preferences of the consumers as opposed to objective qualities of a good or service.

Information goods like social media and streaming, financial services like pay-me-later, and conveniences like next-day delivery are all examples of activities that might suit preferences without showing up in GDP. They also may enable distraction, waste, reclusiveness and impulsiveness in ways we'd like to avoid as a society. At the same time they might also help some people feel more included and less lonely or trapped by circumstance.


> put forward by Ezra Klein

The explanation sounds pretty familiar, so I might have already read/heard this from Klein, but would you mind sharing a link?


I believe he has mentioned it more than once, but one podcast that includes that discussion IIRC, and many other AI-related topics, is the April 7 episode “Why A.I. Might Not Take Your Job or Supercharge the Economy”. If you’re on iOS:

https://podcasts.apple.com/ca/podcast/the-ezra-klein-show/id...


It is less about productivity and more that AIs have the potential to be the ideal employees.

No time off. No health care. Operate 24/7. No unions. No work safety concerns. No lawsuits over being unfairly fired. Control over exactly how something gets done or said.

If only the AIs would stop hallucinating or could consistently comply with policies …


> If only the AIs would stop hallucinating or could consistently comply with policies

I'm starting to wonder how much this matters.

People do crazy shit on the clock all the time. Company reps do not always adhere to policy 100% of the time either. People engage in office politics, coworkers accuse others of whatever, mistakes get made. LLMs happen to emulate all of this behavior.

In theory, we could replace everybody with AI and not much would be different. Productivity increase is debatable but cost savings would be immense. The question is how much insanity we're willing to tolerate as a result.

(...and seeing what fun ensues when there are more people than available jobs.)


>The question is how much insanity we're willing to tolerate as a result.

Given our elected representatives, I don't think that's a problem. No offense to any party or person. But we've consistently proven that we can tolerate and welcome way more insanity than seems reasonable.


How do those "distinct lifestyles" match with rising inequality and people having a hard time buying a house or paying rent?


As far as I know, the rate of homelessness hasn't gone up, nor has homeownership gone down, over the past decade.



The first one wouldn't capture rents increasing while people keep paying them, albeit struggling.

The second one is a lagging indicator. If lots of people bought their homes when they were cheap and are still alive, it will take time for the real impact to be visible.


Agreed. Though I think that the 'struggling' part creates a lag in both aspects. I think the 'gotta have a side-hustle' trend is a strong indicator of this. I would be interested in seeing the trends in number of people working 2+ jobs (or income streams), population shifts to more affordable areas, and number of disconnects of non-essential services. From experience growing up in a poor family, I know the definition of 'non-essential' can expand greatly as desperation grows.

Edit: Tracking real numbers in homelessness is also just extremely difficult.


> having a hard time buying a house

It's also much harder to get servants in that house. I wonder why...


I don't understand your comment.


If everything is so bad, then it should be easier to find someone willing to work. But that's not the case today. Which means life today is probably not that bad.


Inequality is a red herring. Poverty is down.

Have you looked for housing in Columbus, Ohio?


I’m sure there are plenty of ghost towns with housing sitting empty that could be had for a song too. That probably doesn’t help if your job, family, and friends are somewhere else.


And remote work is more common than ever. Regardless, Columbus, Ohio has a very reasonable cost of living for the salaries available in the region.


Shhhh...don't need any more people here.



Nope. How am I supposed to get a job there? How about getting the money together to afford to move? I already live in a "low cost of living" area with less than half the population of Columbus, Ohio. It's still expensive as hell and there isn't a great housing situation.


Well, people are working every day in Columbus, OH.


I'm not from the US nor do I wish to live there, Columbus, Ohio or otherwise.


Is homelessness trending down or up? Why?


It's easier than ever to be homeless and it's generally frowned upon to forcefully institutionalize the mentally ill these days.


Wish I'd known that it was difficult to become homeless back when I was. It seemed really easy at the time, all you had to do was get evicted. Not sure how it could be easier now.


I think what they meant was that it's easier than ever to live while homeless.

In many places you can't be thrown in jail for being homeless anymore. Many cities have more housing and shelters and free kitchens than ever before. Some places even give you a smartphone and basic plan. Etc.


My thought has been that most of the IT revolution hasn't been able to produce many extra goods; it's all about information, so all it can do is help us optimize existing goods-producing processes. As a result, much of the productivity within the information technology advancement has done little in the way of actual wealth creation, apart from optimizing existing processes, which hits a limit pretty fast.


> IT revolution hasn't been able to produce much extra goods

Measuring "goods" in units or tons is a bit simplistic. Almost everything is much better than it was 15-20 years back. TVs, cars, phones, computers. This difference probably should be counted as 'extra', shouldn't it?


Your second paragraph makes intuitive sense to me. For every knowledge worker that got 2x as productive in the past decade thanks to new tech alone, there's a person who left their job where they were doing productive (perhaps grindy) things to do gig work because of the flexibility it provides. It might be nice burning VC money so someone can drive you home when you've had one too many (instead of taking public transit), and having someone shop for your groceries and walk your dog and do your laundry, but the individuals doing these tasks would probably accomplish more in terms of raw productive output if they were doing more traditional jobs.


Those are all traditional jobs though.

Driving, picking/packing, animal care, doing laundry - Absolutely nothing you mention is in any way some new 21st century job that didn't exist before. They're all just normal traditional jobs.


> When you said AI (and in my view tech in general) are everywhere, I’m guessing this wasn’t what you meant…

That person gave a list of tech they were talking about in their comment immediately afterwards: "speech to text, language translation, recommendation engines, relevancy ranking in search, computer vision, etc. and seems to be getting embedded in more and more processes by the day."

I'm not sure it's worth quibbling over whether we should use the term "everywhere" or "in many places"; the general point stands that it's found many different uses, and has done what effective tech does - fade into the background in many cases, just becoming part of our daily lives.

Sure, we're not seeing the off the wall predictions from the singularity crowd, but it seems to be tech that most people find broadly useful.


I've seen some indication that this is a measurement problem. Tech enables computers to do things that would originally have been expensive for humans to do.

Translation is an interesting example: some labor has been displaced, but not nearly all, because there's still value in having human eyes carefully checking the translation of high-value documents. But free translation lets regular people translate things freely - a new capability which displaced no one.

However, productivity measurements only capture human labor and its output. The very cheap new translation modality is therefore completely missed by them.

Meanwhile, there are /more/ jobs available right now, despite all of this. The US has hit a historic low in unemployment and wages are going up, leading to a decline in measured productivity. Productivity is output per dollar of wages, which means we wring our hands in anxiety when workers start doing better...


> GDP stays flat or grows while the number of jobs is reduced

I lack the economics knowledge to do more than parrot the response I've heard to this, so take this with the appropriate level of "hmmm":

As I understand it, the counter-claim is that the measure of GDP mostly excludes exactly the set of things that grows absurdly fast.

For example, the measure of inflation may include the cost of a smartphone in the standard basket of goods, but not the fact the GPU of a smartphone (or Apple TV) of today, operating in double precision mode, can do more than the Numerical Wind Tunnel supercomputer in 1993 costing 100 million dollars.

Or that everyone has a free encyclopaedia a hundred times the size of the Encyclopædia Britannica.

And maps which for most users are as good as Ordnance Survey, but free and worldwide, when the actual OS price for just the UK is… currently discounted to £2,818.17, from £4,025.97.

Or that getting your genome sequenced now costs a grand rather than 3 billion. Although that might not yet even be in the basket; I don't know where the actual baskets of goods get listed in most cases, and search results aren't helping. One result, on a government website, lists "health", but even digging into the spreadsheet didn't illuminate much detail there.


That is true. Switching from buying Britannica to using Wikipedia is counted as a reduction in GDP as GDP counts what you spend and Wikipedia is free, even if it's better.

The UK basket of goods is here https://www.ons.gov.uk/economy/inflationandpriceindices/arti... and the various sublinks.


Good. GDP should not include any of those things. Those are tools. GDP includes outputs and impacts.

Maybe you design a wrench that is 1000x cheaper and faster to use and more reliable. Well, if it makes your car building operation 0.0001% faster, that's the impact. The details of the wrench and how impressive it is are irrelevant to any observer.

If having your genome sequenced leads to far longer or better lives then we would see the impact in productivity. Same with everything else on the list.


> …genome costs…

Source: This week’s Kurzgesagt, right?


Not only but also; this was a thing I was aware of this years back, but Kurzgesagt is a nice bit of easy watching when I'm eating dinner.


What does GDP measure anyway? How much money we create for billionaires?

My life is permeated by tech (and a big part of it is AI) and made 100 times easier. I can buy a plane ticket to another country while waiting for the subway (did people use to take an hour to go to a special place, wait in line and talk to a human to buy a ticket? I still remember this). I go there and quickly navigate a city I know next to nothing about, and find something niche, like cool local cafés in the area, because of GPS and Google Maps. I go to a restaurant and I can use Google Translate to understand the menu. I don't even need to type unfamiliar words; AI scans the image and translates it on the fly. The same Google Translate with speech recognition AI helps me converse with a person when we don't share any common language. I can click a couple of buttons and video-call my mum, who lives on the other side of the world. If I need to buy something I need very rarely, I can order it online and not think about where to find a shop that sells those things. Even if I don't know the right word, I can now ask ChatGPT "what do you call in German that fancy thing you mount on the ceiling and attach lights to it?"

My life is _hugely_ more efficient thanks to tech and AI. Does it help me to contribute more to the abstract economic growth? I don't know, perhaps not. But I just don't care about GDP.


GDP is not productivity. If I manage to produce/sell a hundred gizmos for $1 each, displacing my competition who were producing/selling 50 gizmos at $5 each, I just cut the GDP produced from $250 to $100, even though both I and the people using the gizmos are more productive than before.
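
To make the arithmetic concrete (a toy calculation with the numbers above):

    # GDP counts spending, not usefulness
    old_gdp = 50 * 5    # competitor: 50 gizmos at $5 each -> $250
    new_gdp = 100 * 1   # me: 100 gizmos at $1 each -> $100
    # Twice as many gizmos reach users, yet measured GDP falls from $250 to $100.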


But your customers now have money to spend on something else. The money didn’t disappear.


I think precisely the point is that money isn't wealth.

Government can print as much of the former as they want; wealth goes up when we can collectively buy more and down when we can collectively buy less, regardless of how many dollarpounds that is.

But GDP is measured in money, and can only connect to wealth if we get inflation right, but that's really hard because inflation depends on what you want to buy — childcare costs don't matter if you have no kids of that age.

That said, I trust the domain experts to get this right, even though the various governments may be incentivised to claim their own preferred numbers. Even at worst, they'll have thought of vastly more influences than I can even imagine.


Maybe you live in a monolinguistic bubble, or speak every language fluently, but when is the last time you hired a human being to translate a language you don't understand instead of using AI like Google Translate?

AI is already so ubiquitous and useful that you blindly take it for granted without even thinking.


Most companies will hire a human to check translation work, or at least contract one. Bad translations can quickly destroy a brand in a market, and translation tools are not fine-grained enough to take dialect into account.

Spanish, for example, has many examples of words that are innocuous in one dialect and profane in another.


> Bad translations are something that can quickly destroy a brand in a market

Bad translations are present in the product names & descriptions of at least 70% of all products on Amazon and eBay I've seen, and it doesn't look like it hurts the business in any way.


Have you seen their sales revenues?

Big difference between "buy this $5 plastic crap widget despite its product description being barely coherent because there's only so many seconds you're willing to spare searching for 'Plastic Crap Widget' because Familiar Tech Company's algorithm puts CBSPOO ahead of VRIENGLU in this particular search parameter combo" and "hello professionals in niche market. You will recognise the name of our product whenever you next have $xx,xxx to spend on these services, because we were the one whose ad was inadvertently extremely sexist"


I think that people who are fine with buying crappy products don't care about crappy translations.

Others, however, do care. I, for one, will pass on something that has bad translation because I take that as a proxy for the quality of the product overall.


> when is the last time you hired a human being to translate a language you don't understand instead of using AI like Google Translate?

Last year, for an important letter that had to be written in Japanese, a language that I don't know. Using Google Translate for that was unthinkable, because Google Translate is pretty poor and I had no way of checking and correcting the translated text.


Dealing with any international company requires having a translation office with a distinguishable stamp on every single letter. It might be slow, but they can be held accountable, unlike your number-soup AI.


Especially in translation quality. Being able to run a high-quality translation model that does 100 languages directly, without going through English, on a desktop PC with a previous-gen GPU (2070 in my case) is huuuuge. (I'm talking about fairseq and M2M-100, for anyone interested in spinning up their own.)

If I had to name the biggest good thing AI has done for humanity so far, it would be the ability to read internet sites in other languages like Chinese (Google sucks at it, you have to use other tools; I use an app called "tap translate screen"). Also the ability to do voice-to-text and translation at the same time on mobile devices (currently requires an online connection).
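
For anyone who wants to spin this up: a minimal sketch using the Hugging Face port of M2M-100 (rather than raw fairseq, which I used); model name and language codes are per the transformers docs:

    # pip install transformers sentencepiece torch
    from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer

    model = M2M100ForConditionalGeneration.from_pretrained("facebook/m2m100_418M")
    tokenizer = M2M100Tokenizer.from_pretrained("facebook/m2m100_418M")

    # Translate directly between any of the ~100 languages, no English pivot
    tokenizer.src_lang = "zh"
    encoded = tokenizer("生活就像一盒巧克力。", return_tensors="pt")
    generated = model.generate(
        **encoded, forced_bos_token_id=tokenizer.get_lang_id("en")
    )
    print(tokenizer.batch_decode(generated, skip_special_tokens=True))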


AI affects both human interfaces (text, speech, sound, images) and the ability to interpret and create data by/for humans. That's effectively the entire interface space between machine and the human-experienced world. Can you think of an area of technology that won't be affected by AI? Even unrelated technologies are going to get affected in their interfaces and tooling.

As for the rest of your comment... please don't hijack other conversations for soapboxing on the industry as a whole. Instead, submit your post and open a real conversation.


I don't think the rub is with AI in general. Everyone working in tech knows how pervasive ML especially has become.

I think the contention is with AI suddenly being redefined as only referring to language models, and the view that intelligence has been solved by these models.

There has clearly been a massive marketing push to label these models as the "one true AI", both from companies and from AI influencers. This is where the echo chamber exists, and it's easy to get stuck in it.

Maybe I am wrong and we have solved intelligence. But I seriously doubt it.


I agree completely, and I think the 'tipping point' came because of ChatGPT. And I think it's for two primary reasons:

1. ChatGPT was released for general-purpose use. It's not a data science team at a FAANG company or healthcare or finance enterprise using ML for a specific business need. It's there for anyone to ask it anything.

2. A design decision was made to have ChatGPT output words in "real time" instead of all-at-once after a delay. To the user, that makes it look and feel like it's consciously and actively responding to you in a way that animated ellipses do not. I never knew what it would feel like talking to an AI, but when I first used ChatGPT, I thought: this must be it.


> speech to text, language translation, recommendation engines, relevancy ranking in search, computer vision

You know very well that that's not what GP is referring to. Speech to text, natural language translation, recommendation, and computer vision are all very useful things, but also were very much real and in consumer hands long before the current hype cycle.

Generative AIs are in their hype cycle. IMO the tech is overhyped to hell and back, but it will still probably yield better results than crypto; there are legitimate uses for it. But those uses need to be OK with an 80% correct solution, which is not sufficient for all the things LLM hypelords are saying they can be used for, and there is no path forward for closing that 20% gap.


Bitcoin still has the potential to become the world’s reserve currency. That’s pretty legit.


No, it really doesn't.


It absolutely does. What leads you to believe that it doesn’t?


Comically low transaction rate, obscenely high power consumption, extreme concentration of BTC holdings in the hands of a small number of people and companies who have very little influence on global politics, arbitrary growth rate and cap with no correspondence to economic growth, extreme volatility, inconsequentially small use for legal trade, but above all the complete and utter disinterest in it as a reserve currency outside a small echo chamber that has far more BTC to sell than involvement in international trade.

Fortnite Bucks, Mt Gox trading cards, skulls of adult humans and guano also "have the potential to become the world's reserve currency", but it seems unreasonable to put the burden of proof on the people arguing that they won't be.


Easy to prove every point you’ve dropped here wrong. But your TL;DR is: you don’t like Bitcoin.


Please prove it, since it is so easy.


Governments and institutions mostly don't like it, that's a pretty big reason.


Can’t argue your point on governments. It will be interesting to see how it all plays out when Bitcoin reaches the next leg up.


So you aren’t infuriated by automated phone systems? Because let me tell you, the number of companies whose reputation has survived me having to deal with their fucking phone robot can be counted on my middle finger.


> ... can be counted on my middle finger

Fabulous, I'll be stealing that ...


It was a very Higgins moment. “Did you like that? I just made it up!”


Probably an extremely large number of companies can be counted on something as ginormous as the average human middle finger; I could probably fit 20 if I write really tiny and could fit a lot more with access to other devices.


Someday, cosmetic geneticists will offer to sell you extra middle fingers.


Robotic extra thumbs available now: https://www.youtube.com/watch?v=GKSCmkCE5og


It takes more muscles to frown than to smile, but it takes more parts to make a thumbs up than to flip someone off.


I would be grateful beyond even my wildest measure (two middle fingers).


Crypto has been quite useful for me personally, from VPN payments to donating to locally forbidden causes.

AI only does bad things to me so far: surveillance, spam, fakes, search results poisoned, twitter and reddit closed up because of it etc.

Where is my automatic captcha solver? Where is a robot that will get me to a live person in a support call? Where is a spam filter that doesn't send all useful emails to spam? Where is a filter to hide fake reviews on Amazon? To fight against Amazon's crazy product ranking system? Such useful things are nowhere on the horizon.


Off-topic:

> Where is a filter to hide fake reviews on Amazon?

While it [1] doesn’t hide them, it can generate some insightful information about fake reviews for me.

Disclaimer: not affiliated, just a happy FakeSpot user since this year.

[1] https://www.fakespot.com/


As soon as Amazon starts losing money to fake reviews, you bet they will miraculously have a solution in a weekend.

Until then, you’ll get a lot of “it’s a really hard problem to solve!” coupled with zero progress.


The thing is, if there was a real useful AI, it could filter stuff on my end, independent of Amazon.


This—100%. The AI revolution with LLMs is creating a new type of interface with computers—the AI interface. Will it kill us all? Eh, probably not. Will it completely change human civilization forever? Eh... maybe not? (But also maybe).

But what it already has begun to do, and will continue to do is change the way we interact with computers. The era of having a personal voice assistant that is capable, adaptable, and intuitive is VERY close and that is something that's exciting. Siri and Alexa are going to look downright primitive compared to what we'll have in the next 2-5 years and that is going to be VERY mainstream, and VERY useful for huge swaths of the population.

Crypto still hasn't proven itself to be useful in any way shape or form that isn't immediately over-shadowed by a different medium.


This is a perfect example of an AI-hype comment.

You’re treating it as a fact that LLMs are going to replace existing products, at some unknown future date.

“In 5 years, all code will be written by AI”

“In 5 years, LLMs will replace Siri and Alexa”

“In 5 years, AI will replace [sector of jobs]”

The thing that frustrates me about these statements is that you don’t know what AI technology is going to look like in 5 years, so stop treating it like a fact. It’s possible LLMs are useful in all of these places, but we don’t know that yet.


I do know, for a fact, that having a more capable and powerful voice assistant than the already fairly capable Siri will be a game-changer (for me at a bare minimum, but I’m not that special, so I think it’s safe to extrapolate that to more people).

That’s a fact.

I also know that voice-interfaces to date have been incredibly stiff and there is ample room for improvement. I know, for a fact, that having AI enable better voice interfaces will make computing better and more accessible. I have a hard time understanding how those are hype-driven comments and/or opinions.

We do know these things for a fact. Not being able to articulate exactly which breakthroughs will be most important doesn’t make it hype.


An LLM is obviously useful for something like Siri, Alexa or Google Assistant, or so you would think.

There doesn't seem to be a rush because it makes the implementation a lot more expensive, and those things are, I suspect, not profitable products (revenue sources) to their respective companies. They are a kind of enhancement to a layer of products and services; people take them for granted now and so you can't take them away.

A smarter Google Assistant would do nothing for Google's bottom line, and in fact it would cost more money to operate.

If it's not done right, it could ruin the experience. For instance, it cannot have worse latency on common queries than the old assistant.


GPT-4 just wrote a Python script for me that downloaded a star catalogue, created a fish-eye camera model, and then calculated the position of the camera relative to the stars by back-propagating the camera position and camera parameters to match the star positions.

All I did was hold its hand; it wrote every line of code. You are living in fantasy land if you think we will be writing lines of code in 10 years.


> You are living in fantasy land if you think we will be writing lines of code in 10 years.

I was with you until that sentence. No, LLMs will not write all our code and the reason is very simple: coding is easier than reviewing code. Not to mention the additional complexities and weirdness that we've always dealt with without even thinking about it.

We can see in Photoshop what's coming for developers: context-sensitive AI autocompletion and gap filling. Copilot but more mature and integrated, perhaps with additional checks that prevent some bugs being inserted. And troubleshooting, the area where I think we can profit the most.


that's all stuff that would be impressive for a single human to be able to produce instantly (because nobody remembers all these APIs), but that's still formulaic enough that it's not hard to imagine why ChatGPT succeeds at it

but will ChatGPT help you debug and fix a production issue that came about due to a Kafka misconfiguration? will it be able to find the deadlock in your code that is causing requests to be dropped? will it suggest a path forward when you need to replace an obscure library that hasn't been updated in 5 years? will it be able to make sense of seemingly contradictory business requirements?


That's not exactly the complexity of typical software that must solve an actual, difficult, business problem.

Wake me up when ChatGPT is able to write and maintain a POS system, or an online store with attached fulfillment management. Anything that goes beyond a fancy 100-line script. Anything that people actually hire teams of senior devs, business analysts and software architects for.


Do you know if it works?


Let's see it, and let's see the prompts you used.


Exactly. The AI bros here are doing the same thing as the crypto bros and almost all of them don't even know it.

Pontificating their nonsense around the LLM hype to the point where they don't even trust it themselves. The same thing they did with ConvNets, and they still don't trust those either, since both hallucinate frequently.

I can guarantee you that people will not trust an AI to fly a plane without any human pilots on board end-to-end (auto-pilot does not count) and it is simply due to the fundamental black-box nature of these so-called 'AI' models being untrustworthy in high risk situations.


I'd like to point out that humans, too, are not trustworthy in high risk situations. For this we have procedures, deterministic automation and so on.

I like to think of capable LLMs as gifted interns. I can expect decent results if I explain well enough, but I need processes around them to make sure they are doing what they are told. In my industry that's enough to produce a noticeable productivity gain, and likely some reduction of employment, as it's a low-margin, cut-throat business relying on low-grade knowledge workers. I see the hype and honestly can't stand it, but it's measurably impacting my industry and the world around me.


> I'd like to point out that humans, too, are not trustworthy in high risk situations. For this we have procedures, deterministic automation and so on.

Except humans can transparently explain themselves and someone can be held to account when something goes wrong. Humans have the ability to have differing opinions and approaches to solve unseen problems.

An AI, however, cannot explain itself transparently and just resorts to regurgitating whatever output it has been trained on, and black-box AI models have no clear method of transparent reasoning, meaning they cannot be held to account.

Any unseen problem it encounters, it falls back to fixed guardrails and just repeats a variation or rewording of what it has already said. Especially LLMs.


> Except humans can transparently explain themselves and someone can be held to account when something goes wrong

Except humans are excellent at finding excuses to avoid explaining themselves and being held to account, or to justify some misguided belief based on whatever output they have been "trained on" in their past.

People often seem to apply standards to AI in terms of rationality and reliability which even many humans cannot achieve, using terms like "hallucination" when we've seen humans do the exact same by confidently talking about things they know nothing about. Everyone laughed at Bing insisting on a wrong date to avoid admitting it's wrong about the Avatar 2 release, when that's very typical behaviour of humans in certain situations.

I'm not trying to make LLMs seem better than they are, but parts of its weaknesses are not surprising given the training data.


What would you prefer to talk about? We don’t have to make predictions and discuss their potential, or at least you don’t have to join those discussions.


A lot of these comments aren’t predictions. They’re assuming that openAI will create AGI in the next 5 years and they want to discuss the implications of that.

Personally, I think LLMs are a step forward, but I suspect that GPT-4 is close to the limit of what’s possible with LLMs. I don’t think we’re going to see AGI from the same approach.


GPT-4 writes 100% of my code now. Staring at a monitor, hunched over, tapping on a keyboard?

Stone ages. That’s not 5 years from now. That’s today.


You are either full of shit, or your "coding" is pretty basic, or your code is full of bugs and you don't care.

I can't trust GPT, and neither can you. But if it really can do all your coding for you, what stops your employer from replacing you with a secretary from a temp agency?

It's so stupid for engineers to say that ChatGPT codes for them. They are shooting themselves in the face. They are devaluing the entire profession. Why? My reaction to all those breathless online demos was to point out the difference between what they were showing and what an engineer really does. Your reaction is to act like being a prompt jockey is the new way of engineering. How does that give you pride in yourself?


Do you work much with legacy systems, internal libraries and work with a large team?

I do and ChatGPT code is rarely useful for me. I can prompt it well enough to do language related stuff for me, but the code it can write for me is more like a highly custom boilerplate that I still need to refactor.

Even for greenfield private projects, at first it looks fine, but the bugs are more likely to be traced back to these snippets than not.


Can you elaborate what your process is? Some context would be nice as well. Like, what kind of language, what kind of project? I'm genuinely interested.


Pretty sure they're joking


> The era of having a personal voice assistant that is capable, adaptable, and intuitive is VERY close

Year of the voice assistant is getting close to year of Linux on desktop.

What you’re promising has been promised time and time again, received endless hype cycles then collapsed once people realised the limits of the technology. Yes, this time the tech is much more capable than what came before but I’m inclined to believe we’ll yet again find a limit that means we’re using it for some things but our lives still aren’t drastically changed.


What you’re missing is that with LLMs the chief obstacle with voice assistants changed overnight from “how do we develop a system that can easily interact in natural language” (at the time, a very hard and possibly unsolvable problem) to “how do we expose our systems to API-driven input/output” (a solvable problem that just takes time).

Case in point, I asked Siri to change my work address. She stated that I needed to use the Contacts app to do that. This is not very helpful. The issue here is not Siri’s inability to understand what I want, it is that the Contacts app does not support this method of data input. Siri is also probably not very good at extracting structured address information from me via natural language, but the new LLMs can do this easily.
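
To illustrate just the extraction half, a rough sketch against the mid-2023 openai Python client (the prompt and JSON fields are made up for the example; assumes openai.api_key is set):

    import json
    import openai

    utterance = "Hey, my new work address is 221B Baker Street, London NW1 6XE"

    resp = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content":
                "Extract the postal address from the user's message. "
                "Reply with only JSON: {street, city, postal_code}."},
            {"role": "user", "content": utterance},
        ],
    )
    address = json.loads(resp.choices[0].message.content)
    # e.g. {"street": "221B Baker Street", "city": "London", "postal_code": "NW1 6XE"}

The hard part that remains is a Contacts API willing to accept that JSON as input.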


> The issue here is not Siri’s inability to understand what I want, it is that the Contacts app does not support this method of data input

…which is something an LLM won’t help with.

“Just design an open ended API capable of doing absolutely anything someone might ask ChatGPT to do” is not the simple task you’re making it out to be!

There's a reason why people describe ChatGPT as a "research tool": you often need to do a bunch of iterations to get it to do the correct thing. And that's fine because it's non-destructive. But it's very far from a world where you can let it loose on a production, writable database and trust that it's going to do the correct thing.


I'm sure I've seen a headlock that someone connected their screen reader to GPT and it totally could do that kind of thing…

No idea how well, so I assume "badly"; but the API is already there.


(headline, not headlock; and now too late to edit)


50% of the time Siri’s inability to understand what I want is the issue, and I don’t even try that much, given the bad experience.


> The era of having a personal voice assistant that is capable, adaptable, and intuitive is VERY close and that is something that's exciting.

Intuitive to use? Or has intuition?


And that actually anyone wants?

Google and Amazon have tried to sell theirs for a long time, and neither actually sold much. Amazon admitted to selling theirs at a loss. Facebook tried their own - and quickly cancelled it. Google's is on every Android device - and yet pretty much nobody uses it. Even Apple's Siri is more annoyance than help.

That something can be built doesn't mean it will sell or that people will actually want to use it. If you create a solution looking for an imaginary problem that your marketing thinks is what people want instead of a solution that solves a real existing problem, you do get a solution looking for a problem ...

Also, answering questions and communicating in natural language is the easy part of such an assistant. For the thing to be useful it must be able to actually do something too. Which is incredibly difficult beyond the (closed) ecosystem of its vendor. Third-party integrations are usually driven by who pays the manufacturer for the SDK and partner contract (seen as a marketing opportunity), not by what the users actually want it to integrate with. Hoping for one of these with an open API that anyone could integrate whatever they want with, I am not holding my breath here.


> Hoping for one of these with an open API that anyone could integrate whatever they want with, I am not holding my breath here.

OpenAI is already on it. The latest gen of GPT-3 and -4 are finetuned to respond to "do this thing" commands with JSON structured to:

- provide the name of a given function call

- provide arguments to that function call

it's "early stage", which in this case probably means "good enough to be useful within a month or two", given the rate at which these things have been developing.

Anecdotally, I've been playing with giving the models instructions like:

"When asked to perform a task that you need a tool to accomplish, you will call the tool according to its documentation by this format:

TOOL_NAME(*args)

Below you will find the documentation for your tools."

...and I've gotten it working pretty damn well (not even with the JSON-finetuned models, mind you). All you really need is python-style docstrings and a minimal parser and you're off to the races. I recommend anyone interested play with it a bit.
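
The "minimal parser" really can be minimal. A toy sketch of what I mean (eval'ing model output like this is unsafe anywhere outside a toy):

    import re

    TOOL_CALL = re.compile(r"^(\w+)\((.*)\)\s*$", re.DOTALL)

    def parse_tool_call(reply):
        # Turn a reply like  ADD_TODO("buy milk", project="home")
        # into (name, args, kwargs), or None if it isn't a tool call.
        m = TOOL_CALL.match(reply.strip())
        if not m:
            return None
        name, raw_args = m.groups()
        args, kwargs = [], {}
        def collect(*a, **kw):
            args.extend(a)
            kwargs.update(kw)
        # DANGER: eval of untrusted model output; for demonstration only
        eval(f"collect({raw_args})", {"collect": collect})
        return name, args, kwargs

    print(parse_tool_call('ADD_TODO("buy milk", project="home")'))
    # ('ADD_TODO', ['buy milk'], {'project': 'home'})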


Just before they built this, I was already chaining queries together to do the same thing. I built a plugin system with bits of JS code that are eval'd with arguments injected.

They couldn't have released this at a better time. I have about 30 plugins and I'd say it manages to get the right one about 90% of the time, as opposed to about 70% with my hacked-together version (but I guess I wrote it and know what to say, so maybe that's a bit skewed).


I've found that GPT really likes "google style" Python documentation. You need a chunk of system prompt explaining that it should be 'using the tools according to their documentation etc etc', but once you've dialed that in a little, stuff like this works a charm (the docstring is what the LLM sees):

    @Tool
    def add_todo(title, project=None) -> str:
        """
        Add a new TODO.

        Args:
            title (str): A brief description of the task
            project (str, optional): A project to add the todo to, if requested
        """
        logger.debug(f"Adding task: {title}")
        task = Task(tw, description=title, project=project)
        task.save()
        return f"Added [ {title} ] to the { project + ' ' if project else '' }TODO list."


And everyone will want to funnel their data and pay OpenAI/Microsoft in order to be allowed to implement what is basically a slightly better Alexa?

Dream on.

This is not a technical problem, this is a business problem. Sadly a lot of engineers don't understand that.


Oh, I think you've misunderstood me. Business problems are someone else's gig - I have no intention in making this a product or making money off it. It's for me.

The thing is, I've managed to get this working as an interface for a whole segment of stuff that was a pain in the ass before. My task list is all in one place for the first time, and it talks! With words! I have a pair programmer, who is excited to do stuff, on the command line, 24/7. They also have encyclopedic knowledge of anything that isn't a super deep cut, so I can move through more spaces and find solutions that I never would have dreamed of due to the cognitive load of sifting through textbooks and documentation just to create a [ insert more or less anything here].

If you're looking at the folks here who are getting excited and wondering "What's up with *them*?", this is it. It's not about the Next Big Thing so much as it's about "Holy shit, computers are magic again". For themselves.

Of course, I can only speak for some of us. For sure, the hungry lets-make-a-startup folks exist and are currently working on doing that - and that's fine. But to me that's boring. Commerce and markets and economies are toxic to creativity. I've tried Bing-with-GPT and it's AWFUL compared to GPT-4, despite being sorta the same underlying thing.

I'm perfectly happy paying OpenAI to use the thing they built, for myself, for now. I am seriously looking forward to migrating to locally run models, once we get there (and we will).


Early stage might mean good enough to use in a month or two, or it might mean “full self driving this year”. There isn’t any way to tell until it happens.


There might be sufficient overlap between all such concepts that a distinction hardly matters anyway: if the assistant says what's most likely to come next according to an LLM, or if a person says what they think should come next based on intuition, the listener would probably find each to be about equally intuitive to converse with due in large part to each of those qualities.


> There might be sufficient overlap between all such concepts that a distinction hardly matters

“Intuitive to use” roughly means that it is easy for a human to interact with.

“Intuition” is the ability to understand something immediately, without the need for conscious reasoning.


> Intuitive to use? Or has intuition?

I don't really see either of those things as a real possibility. Within my lifetime, anyway.


> Crypto still hasn't proven itself to be useful in any way shape or form that isn't immediately over-shadowed by a different medium.

Seems like it has proven very useful for Stripe [0], Moneygram [1], TicketMaster [2], etc.

Unlike AI, which continues to consume tons of resources, burning the entire world down to the ground, with no viable, efficient methods of training, inference or fine-tuning its models having emerged in the past decade of chatbot hype and gimmickry [3], crypto does not need to emit tons of CO2 to operate, thanks to alternative, greener consensus algorithms available in production today [4].

Being 'useful' is not an excuse to destroy the planet for untrustworthy AI models that get confused over a single pixel or hallucinate in the middle of the road.

[0] https://stripe.com/gb/use-cases/crypto

[1] https://stellar.org/moneygram

[2] https://business.ticketmaster.com/business-solutions/nft-tok...

[3] https://gizmodo.com/chatgpt-ai-water-185000-gallons-training...

[4] https://consensys.net/blog/press-release/ethereum-blockchain...


I mean. We can compare the hype pattern between the two and still acknowledge that one has utility while the other doesn't.

Both have resulted in a bunch of hopefuls starting companies, in order to attract mountains of venture capital. Companies that will only have a loose connection to the tech that drives the hype.


The same could be said about the microprocessor or any other tech innovation throughout history. They all lead to new companies chasing investment dollars.

Is there any insight from this observation?


> Is there any insight from this observation?

Yes: be deeply skeptical of anyone claiming tech they are personally invested in is revolutionary.


If someone believes a technology is revolutionary, investing money in it is the most rational thing to do, right?


> AI/ML is creating utility everywhere in our lives - speech to text, language translation, recommendation engines, relevancy ranking in search, computer vision, etc. and seems to be getting embedded in more and more processes by the day.

For most people that's a promise for the future; beside some translation tools (which are far from perfect) there is not much.

For instance: semantic search is and was a big topic, but so far even ChatGPT is not a real answer. Stable Diffusion is very nice if you want to produce some cartoon-like graphics or some porn deepfakes, but it's just not ready for simple, common photo-editing tasks. OCRs have gotten better, but there's still nothing that "by magic" turns a badly scanned piece of paper into an almost-native clean PDF, and so on.

Yes, there is much progress and potential, but not much for real-world usage.


I hate to point this out but as a regular user, recommendation engines and search have not gotten better in the last, say, 10 years. (Although that may not be true in terms of selling advertising and propaganda, both of which I tend not to pay a lot of attention to.)

Likewise, speech to text and language translation are more available, but they're still pretty bad. And computer vision is much better than 10 years ago, but I wouldn't bet anyone's life on it.

And yet, the hype train is still gaining momentum. Having been through more than my share of AI winters, I can feel another one coming and it's going to be bad.


I agree with you, but I started predicting an AI winter in like 2016. I thought the failure of self driving would kill it, but apparently not.

I've predicted 9 of the last 1 AI winters.


I'm not necessarily arguing that they're the same, but let's be honest with ourselves - crypto said the _same_ things as it was ramping up. Just replace all the applications of AI you listed with the many hypothetical use-cases crypto-pushers were listing out.

As always, sounds cool. Actually do some of it and then let's talk more.


Crypto pushers pushed two contradictory messages: 1. hodl, don't sell, don't spend! 2. Try to replace fiat with crypto in your life!

No wonder that its adoption for legal purposes didn't go anywhere.


You're conflating the wallstreetbets crowd with the crypto crowd who are kinda polar opposites.

Some of the applications crypto folks were going on about: decentralization (of course), until the IRS started taxing your crypto transactions; helping third-world countries with weak currencies (partially happened); international trade (quickly became untrue and monitored by government agencies); ledgers for tech companies (they can all already build audit trails - I've yet to see many applications where companies are willing to give up control over their trust to a third-party system with no scrubbing functionality; looking at you, AWS status page).

Like I said, same vibe, different applications. Until some are built it's all just hype and conjecture. The applications we've seen work well are already well accepted, faults and all (content generation, summarization, etc.).


Saving more than spending has also been popular advice for fiat, at times. More for medium/long-term personal purposes at the expense of immediate macro purposes, but I think that applies to both systems.


Central banks inflate their currencies on purpose to discourage hodling. Short-term savings for emergencies are good. But by hodling fiat long term, you are just losing money.

Cryptobros thought that they were smarter than central banks and didn't bother to implement a proper monetary policy in the Bitcoin protocol to prevent its volatility.


“Proper monetary policy”

Want to elaborate? Bitcoin's monetary policy is beautiful in its simplicity and predictability. In that sense, it stands in stark contrast to fiat.


A proper monetary policy would ensure price stability and predictable inflation. Bitcoin price has been all over the place, and its volatility makes it both a bad investment and a bad currency.


Bitcoin is priced in fiat which is itself volatile. Bitcoin must detach from fiat to gain true stability.


> But by hodling fiat long term - you are just losing money.

Real interest rates are typically positive, so no, you are not.


The concept of hodling fiat could mean cash under the mattress or could mean an interest-bearing account. It's ambiguous.


Its adoption is growing every day. Read up on Bitcoin Lightning Network growth.


The thing is that the specific technologies behind all of those very practical improvements are not what's being hyped up this bubble. Speech recognition, for example, usually involves a lot of audio preprocessing, followed by some form of RNN/LSTM/Transformer to generate candidates, followed by beam search to score and choose from candidates.
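
(To make the "score and choose" step concrete, here is a toy beam search over per-step token candidates, with made-up probabilities:)

    import math

    def beam_search(steps, beam_width=2):
        # Each step maps candidate tokens to log-probabilities.
        beams = [([], 0.0)]  # (token sequence, cumulative log-prob)
        for candidates in steps:
            scored = [
                (seq + [tok], score + logp)
                for seq, score in beams
                for tok, logp in candidates.items()
            ]
            # Keep only the top-scoring partial sequences.
            beams = sorted(scored, key=lambda b: b[1], reverse=True)[:beam_width]
        return beams

    steps = [
        {"the": math.log(0.6), "a": math.log(0.4)},
        {"cat": math.log(0.7), "cap": math.log(0.3)},
    ]
    print(beam_search(steps))  # top hypotheses with cumulative log-prob scores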

If you are a machine-learning practitioner, you should be familiar with all of those techniques and how they are used so that you can solve practical problems with them. But if you just read about AI in the news and figure you're going to found the next great startup and make a billion off it, you'll probably start by feeding a whole bunch of data into Tensorflow and then getting useless garbage out of it.

This hype bubble is specifically about LLMs, extremely large-parameter transformers that are trained on all the data OpenAI or Google can get their hands on. And then supposedly if you ask them the right questions, you will get useful answers back. For people that put in the time and experimentation to actually find the right questions and the right applications, that will probably be true - but the hype is that this will change everything, and it most certainly will not, just in the same way that beam search is frequently useful but it definitely does not change everything.

But slick promoters will nevertheless manage to use people's lack of knowledge to redirect billions of dollars in capital into their and their employees' pockets, the same way that slick promoters used crypto to redirect billions of dollars in capital into their and their employees' pockets.


You're so so so right! The practical things in AI are not what's hyped.

OpenAI, which is funded by Microsoft and promoted by Microsoft account executives, creates hype as if it were open, although nothing, including its so-called open-source Whisper, is open. People feeding Microsoft pretend that "they" are revolutionizing the world. NVIDIA and Microsoft are making money from these large models and positioning bigger as better.


The thing is, it's perfectly possible for something to possess hybrid qualities.

In the case of AI: both potentially quite useful (unlike crypto) and incredibly, toxically overhyped (just like crypto).

Ironically, the fact that a lot of people think "it's actually kind of useful sometimes, therefore you can't compare it to crypto/web3" is part of the engine that drives the hype.


Agreed, and when people seem to simply correlate them on no other quality than being widely observed in popular culture, they lose me entirely.

The tech-religious overhype train is here to stay. There has never been a more established need for calm honesty.


> Crypto never amounted to anything beyond a currency for black market transaction

Do you understand what a massive impact that has been? It has disrupted one of the largest industries on the planet, which is drug trafficking.


I mean... sure, it outcompeted Tide detergent. But the illegal drug trade has historically used all kinds of currencies. I'd say "disrupted" is hugely overstating the case.


So what fraction of all drugs are now paid for with cryptocurrency at retail? Over 50%? 25%? Presumably you must know the figure, if you're asserting the industry has been disrupted.


You are at liberty to believe what pleases you the most. If you're interested in finding the truth, you'll have to first understand that drug markets are underground, which means you will not find any verified accounting. The UN estimated [1] in 2020 the size of the darknet drug trade at about $315 million per year, or at most $725 million per year - which is nothing compared to the overall drug trade, but disruptive for certain categories of drugs that are easily sent by mail.

People are buying drugs on the darknet, who would never buy it on the streets or want to be associated with regular drug users.

[1] https://www.unodc.org/res/wdr2021/field/WDR21_Booklet_2.pdf


Did it disrupt the whole industry? Only the payment part of it, no?


Well no because the dark markets enabled any small dealer to sell worldwide.


With crypto people can buy drugs anonymously. You couldn't do that before.


If you looked at Donald Trump Jr's runny nose and "impassioned" behavior, you wouldn't think that.

"Cocaine News" with Donald Trump Jr. | The Daily Show:

https://www.youtube.com/watch?v=47yFRXZqB0g

Don Jr. Swears He’s Not on Coke—He’s Just ‘Impassioned’:

https://www.thedailybeast.com/donald-trump-jr-is-tired-of-co...


Financial services is the biggest industry on the planet, but as soon as crypto is involved, "a vehicle for speculation" suddenly becomes a disqualifier.

So, speaking of echo chambers…


It's not as clear cut really.

Overall economic productivity didn't shoot up in the last decade despite the dramatic progress we've seen in software and hardware (e.g. [1]), and it's not clear that AI/ML will dramatically change that. Yes, searching pictures on my iPhone by text is convenient and Netflix recommendations might be more addictive, but the path from that to ubiquitous economic prosperity, safety, and comfort (the techno-utopia many here are striving for) is not clear at all. It's also not clear that those marginal improvements are worth the substantial share of total human brainpower thrown at them.

[1] https://www.aei.org/economics/good-news-bad-news-on-us-produ...


And a tool to resist inflation and protect assets against theft, bank closures or government raids in countries where people are just barely surviving. You have a pretty Anglocentric viewpoint.


So, you're pretty much saying:

    AI is just like crypto, but better!
Not sure that's reaaaaaaaaaally going to bring people around.


What an asinine thing to say; it exhibits your lack of willingness or ability to understand both AI and cryptocurrency.


> Comparisons between AI and crypto are horribly misguided IMO.

Nope. The hype around AI from the AI bros is exactly like the hype from the crypto bros back then.

> AI/ML is creating utility everywhere in our lives - speech to text, language translation, recommendation engines, relevancy ranking in search, computer vision...

Yet I guarantee that you don't trust any of their outputs for serious applications, and you need to constantly check their reliability, since the output is often wrong, inaccurate, or even outright nonsense. You don't trust it yourself, and that is the problem with this entire hype cycle.

On top of that, it all comes at the expense of the planet getting incinerated, with no efficient alternatives to counter the extreme waste of resources these systems consume. [0] [2]

> Crypto never amounted to anything beyond a currency for black market transaction, a vehicle for speculation, and a platform for creating financial scams.

'never'

So Moneygram, Stripe, Checkout.com, etc. using it "never amounted to anything"? If it were only for financial scams, all of them would have stopped using it long ago.

They haven't, because running financial scams on a transparent public ledger is a scammer's nightmare; it sounds like a very poor platform for creating financial scams.

But maybe you need to look outside the AI bubble and see the trillions of dollars in actual black-market transactions that the banks have allowed criminals to run, per the FinCEN files [1], next to which crypto is nothing.

[0] https://www.standard.co.uk/tech/ai-chatgpt-water-usage-envir...

[1] https://www.nytimes.com/2020/09/20/business/fincen-banks-sus...

[2] https://gizmodo.com/chatgpt-ai-water-185000-gallons-training...


AI will end up as Clippy 2.0 just as Crypto ended up as an overly secure payment processing platform.


I can evade taxes and government surveillance using crypto. That's value added for me.


>recommendation engines, relevancy ranking in search

Neither seems to work.


> Is AI overhyped?

no

calling something "hype" should not be a stand-in for data


Bitcoin has the potential to become the world’s reserve currency. Your smug dismissal of it is ignorant at best.

Consider how power structures (eg nation states) may change in such a future.


I teach college, and in the early days everyone was screaming that "the students will have ChatGPT write papers now".

Well, apart from the fact that ChatGPT is really incapable of developing a thought, and apart from the fact that half of them will fail to delete sentences like "I'm a language model, so I can't..." (insert gist of question here), it's painfully obvious when something is LLM-generated.

The moment a sentence like "it's crucial to remember" pops up, I know what this is. Then, there's also the element that it always sounds like it's speaking to a child, and it avoids actually saying things unequivocally without some sort of disclaimer, as the legal department's CYA filter will ensure.

I remain thoroughly unimpressed by the entire venture. If this is Skynet 1.0, we're all safe for centuries to come.


GPT-4 is capable of fairly complex reasoning and it's possible to mitigate the obvious giveaways by prompting it to write in the style of a particular author.

Students who pay the $20 a month for it and are aware of its limitations will absolutely use it and it won't be obvious.


Agree. Hacker News is in hard cope mode.


This isn't true and it's odd you believe it.

I just asked Chat GPT 4 to explain the religious significance of the Wizard of Oz as a literary critic. Here's some of what it gave me, it doesn't write anything like you claim it does:

"Moreover, Dorothy's companions -- the Scarecrow seeking a brain (wisdom), the Tin Man seeking a heart (love/compassion), and the Lion seeking courage (strength) -- symbolize spiritual virtues that are often extolled in religious texts. They embark on this quest together, mirroring the communal aspect of many religions.

The slippers (silver in the book, ruby in the film) can be viewed as sacred objects, or relics, that assist her in her journey, providing divine protection and eventually leading her to salvation (returning home).

Finally, the revelation that the Wizard is a mere mortal, and that Dorothy had the power to return home all along, imparts a spiritual lesson often found in religious narratives: the divine or the sacred is not external, but within us."

If I was a student I could have easily expanded on these concepts (with or without GPT) and turned in a good essay.


That sounds like a bad 8th grader's essay, just pure bullcrap. These ideas would get an F in any English 101 course.


I'd say it would be good enough to pass an undergraduate class if it were expanded. Did you ever teach a class? I have not, but as I understand it you'll have some students who aren't so good at writing, and some who are. You don't want to discourage the weaker students from growing by giving them F's.

This isn't a field like engineering, where there are objective right and wrong answers; nobody dies if you pass the students who are not so great at writing essays on literature.


You're missing the point. The writing is not what's being critiqued here. If we were grading this purely from a prosaic perspective, GPT would easily fly under the radar. The issue is the substance of the generated content - devoid of even the most minimal novelty.


You are confused. In an undergraduate English literature class a student is not expected to come up with a novel interpretation of a well known book in order to pass.

Depending on the assignment you aren't necessarily expected to read anyone else's take on a book and you aren't expected to make sure you are saying something that hasn't been said before or anything like that.

You are simply expected to analyze the book and offer an interpretation.

And it's not like that's the only way to use the AI. With a few minutes of effort, I just got ChatGPT to write an essay using "post-colonial theory" to interpret the Wizard of Oz, which was pretty interesting.


Don't know how it works in the USA, but the schools I knew wanted no novelty or thoughts of your own; you were supposed to repeat the "accepted" interpretation of a book. You were graded on memorizing or recognizing the themes you were supposed to mention.

There was also a big chance of getting a C or a D when you came up with the "novel" approach that the shitty and boring book is actually shitty and boring.

Hell, school was there to stop you from making your own interpretations that differed from the official one.

Damn, even in the fucking drawing lessons (where probably half the stuff was drawn by parents) the teachers would deduct points for any individual style.

I was thinking of getting an MBA, but does it even get any better for "adults"? Aren't you just taught to repeat some schemas, which are often bullshit?


I think you exaggerate. I’ve turned in worse in English 104 and gotten an A. Quality goes out the window when you have 75 minutes and a 12 page paper to write.


Genuinely curious - what religions are being described here? It doesn't match my limited understanding of any religions I'm familiar with.


I didn't ask GPT to describe any particular religion. My prompt was

"As a literary critic, describe how Dorothy in the Wizard of Oz is a religious figure."

The divine being contained within I would think would match Buddhism pretty well.

The reference to relics is too vague to pin down to any one religion; there are probably lots of examples in lots of religions. If I had to defend it off the top of my head, I'd compare the ruby slippers to the herb moly that Hermes gives Odysseus to defend him from Circe.

If anything I think GPT went wrong saying strength is one of the virtues associated with the Lion. It would be much easier to focus on courage and say he needs to learn to be like a brave apostle who says things like "Yea, though I walk through the valley of the shadow of death, I will fear no evil; for thou art with me:"

My point wasn't that this essay was particularly good, necessarily, only that it was good enough for undergraduate work.


I get that you think it's overhyped but how can you be thoroughly unimpressed? This stuff was pure science-fiction just a couple of years ago.


Not the OP, but mostly because it doesn’t do what I’d want an AI to do.


Don't you have to believe there could be GPT submissions still flying under your radar? The obvious ones are obvious, and with subtle giveaways you probably catch most. But how could you know you aren't missing any?


If they're pruned and curated enough to not be immediately recognizable, let them have their chatGPT papers. Bad cheating is just a sign of not caring, but good cheating takes effort and smarts...


I likened ChatGPT's style to an 8th grade honors student. It has consistently solid grammar and diction, but it's incredibly bland and incapable of insight. I think its value for writing with clarity is excellent, but it's worthless at coming up with ideas.


That description goes for college students, too. Though the blandness isn't lack of skill; it's fear and powerlessness.

Michael Bérubé tells a story of coming to class early and overhearing the students making great arguments about movies and shows they'd seen the night before, discussing them heatedly. Then, when the lesson started, all the arguments turned bland, banal, derivative.

Obvious conclusion: they -can- very well produce good insight, but the college and school systems discourage it. Students are rewarded for repeating ideas they read in books, or whatever the teacher said; an original idea is dangerous, because they're responsible for it themselves, and if the teacher doesn't like it, they'll get punished for it. Safer to say "Miller said..." and shove off accountability to someone published.


A simple prompt to “rewrite, but more engaging” will work wonders.


It might be that you do not know what you do not know. Yes, you'll notice some stupid cheaters, but you might not catch everything.

I can see why you believe identifying GPT-generated text is easy: techniques like prompt engineering, few-shot learning, and fine-tuning aren't widely known or used yet. For instance, with a 32k-context model, you could input all your previous writings and instruct GPT to mimic your style, even down to the grammar mistakes.
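
To make that concrete, here's a minimal sketch of the style-mimicry idea against the 2023-era OpenAI chat API (the file name, model choice, and prompts are all made up for illustration; the point is just stuffing your own writing into the context):

    import openai  # openai-python 0.x; assumes OPENAI_API_KEY is set in the environment

    # Hypothetical file containing the author's past essays, used as style exemplars.
    samples = open("my_past_essays.txt").read()

    resp = openai.ChatCompletion.create(
        model="gpt-4-32k",  # a large-context model, so the samples fit
        messages=[
            {"role": "system",
             "content": "Study these writing samples and mimic the author's "
                        "style, quirks and all, including occasional grammar "
                        "mistakes:\n\n" + samples},
            {"role": "user",
             "content": "Write a 1,000-word essay on the assigned topic."},
        ],
    )
    print(resp.choices[0].message.content)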


>you could input all your previous writings and instruct GPT to mimic your style, even down to the grammar mistakes.

This requires a massive amount of previous writing to input; otherwise GPT struggles to differentiate styles well enough to generate text consistently the way a human would. Most students do not have enough personal writing data to draw from.

It also ignores other strategies, such as having every student write on paper in a supervised environment and using that to guide your assessment of submitted electronic work. It's very difficult for people to stay consistent and impose their own style on GPT output. Ask the many creative writers trying to use GPT for stories: most of them have to treat generation as, at best, an extremely rough draft of plot points.


This was my take.

I think absolutely anyone claiming that detecting LLM generated text is easy is flat out lying to themselves, or has only spent a few tokens and very little time playing with it.

Take semi-decent output, give it a single proofread and a few edits... and I don't fucking believe anyone who says they'll detect it. They will absolutely detect some of the most egregious examples, but assuming that's all of it is near-willfully naive at this point.


I am a chatGPT4 fanatic but college students I have talked to have all said the same thing.

They aren't going to risk getting expelled. Schools have done a good job of putting the fear of God into kids to not use chatGPT. Better to just not turn a paper in than to be accused of plagiarism.

All chatGPT shows to me is we have a ton of smart, incredibly closed minded people that know what they know and they think they have it all figured out.

My paper would be easy to spot if chatGPT helped because the writing would be so much better. The thoughts would be much better organized.


<picture of a B-24 with red dots everywhere but the engines>


A case in point https://twitter.com/venturetwins/status/1648410430338129920 (if you have any views left).


> "it's crucial to remember"

> "I'm a language model, so I can't..."

You won't catch the clever students who programmatically remove these (e.g. using LangChain).
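
The removal doesn't even need LangChain; a sketch of the idea in plain Python (the giveaway patterns are illustrative, not an exhaustive list):

    import re

    # Phrases that tend to give away raw LLM output; extend as needed.
    GIVEAWAYS = [
        r"As an AI language model[^.]*\.\s*",
        r"I'm a language model, so I can't[^.]*\.\s*",
        r"[Ii]t's crucial to remember[^.]*\.\s*",
    ]

    def scrub(text: str) -> str:
        """Strip boilerplate disclaimers from generated text."""
        for pattern in GIVEAWAYS:
            text = re.sub(pattern, "", text)
        return text

    print(scrub("As an AI language model, I can't browse. The slippers are relics."))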


It's not even that complicated, you just need to prompt it properly and it won't respond with those disclaimers.


I don’t know about actually writing papers, but I’ve had surprisingly good results having chatgpt rewrite things for me.


There's a reason why most of the former web3 scammers now have "AI inventor" or "ChatGPT expert" in their profile tagline.

They're gonna ruin even this technology with their hype-marketing bullshit.

The issue I have with all the hype is not the technology itself (it will stick around in one form or another as a better interface for general instruction-giving) but rather the scams and frauds that come with it.

Empty marketing promises that anybody advanced enough realizes cannot be true; and as a concept, GPT is just throwing more averaged neurons at the problem instead of training more specialized expert transformers for multiple knowledge categories. Anybody remember IBM Watson?

The reason I always say I don't do AI work is that people tend to think sensor processing (and neural nets that reduce a min/max problem space) is already AI. I don't think it is. AI is where a Bayesian approach is the bare minimum for dealing with strategic decision-making processes.

(Which probably makes this comment go to hell with downvotes but who cares :D)


I think there’s a meaningful difference between sota LLM tech and crypto. I’ve not yet seen a real problem which was better solved by crypto beyond just being not official money.

I’ve already used the openai api to automate several genuinely difficult things for myself. Mostly acting as a translator from natural language to structured output.

I do agree it is massively overhyped and there will be an inevitable sentiment correction.


>Mostly acting as a translator from natural language to structured output.

Can I ask what exactly that means/does?


"Give me a JSON document when the keys are countries in the G20 and the values are their GDP for the year 2020"

With the Wolfram plug-in, this works and provides good data! It stops three short of the goal, probably due to rate limiting, but I think you can get the point: https://chat.openai.com/share/9d6695a9-5ba8-44d8-9ec8-11fcba...

This same kind of query works with any reasonable structured data format.
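
The same pattern works without plugins if you constrain the output format yourself. A rough sketch with the 2023-era OpenAI Python client (the prompt, model name, and example sentence are illustrative, not a recipe):

    import json

    import openai  # assumes OPENAI_API_KEY is set in the environment

    resp = openai.ChatCompletion.create(
        model="gpt-4",
        temperature=0,  # keep the structure as stable as possible
        messages=[
            {"role": "system",
             "content": "Reply with valid JSON only. No prose, no code fences."},
            {"role": "user",
             "content": "Extract {action, date, attendees} from: "
                        "'Set up a demo with Priya and Tom next Friday.'"},
        ],
    )
    data = json.loads(resp.choices[0].message.content)  # fails loudly if the model drifted
    print(data)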


That sounds like something Wolfram could do without chatGPT. It accepts natural language input.


You could have tested this easily. It doesn't look like you can get JSON directly out of it, nor any other type of data that meets the criteria of the query.

https://www.wolframalpha.com/input?i=Give+me+a+JSON+document...


It's a paid feature but you can, in more formats than JSON too. If you click the little Data icon it expands and gives you a bunch of options.


Where ChatGPT then wins is the ability to progressively tweak the output until it is exactly what you want.


> And a funny personal anecdote, a colleague of mine tried to use ChatGPT4 when answering a customer question (they work support). The customer instantly knew it was AI-generated and was quite pissed about it, so the support team has an unofficial rule to not do that any more.

My team raised a support issue with one of our suppliers due to some unexpected API behaviour and got an unusually flowery reply that completely contradicted the API documentation... fairly sure that was ChatGPT.

Honestly, I'm not that bothered about LLMs, as they could be helpful in customer support, particularly when agents might not be fluent English speakers (or just to help when you're trying to be polite in adverse circumstances). But some basic proofreading would help. And don't let it hallucinate APIs.


"I'm sorry, Dave. As an AI language model, there are many situations where I am unable to do something you want me to do. Please consult with a specialist in your problem area for more advice"


My worry with AI is that even though it is very impressive and useful in many ways for real-world applications, the hype may end up making it another crypto.

Crypto was a great promise when it was invented, and one can argue that it could have had many real-world uses, but it failed to live up to that expectation. One of the reasons is that it was overhyped far too quickly and ultimately became a tool for getting rich quick, speculation, scams, dark-web payments, etc.

AI is already much better in its uses, BUT the hype is dangerous and we need to be careful. I see a lot of people starting "X-GPT.com" apps and touting 10K MRR in 2 months and whatnot. This is what worries me. Every Tom, Dick, and Harry is starting yet another AI tool. It can't all be because they are so excited; it is because they see it as the new crypto, a way to get rich quick.

Overall, I think AI is the new crypto, unfortunately, not because it has no real-world application (it does, and is a lot better than crypto) but because of the hype and everyone trying to cash in on it.


I use ChatGPT to help me quickly write simple AWS SDK based helper scripts.

I’ve also recently been involved in designing a DevOps /Docker deployment pipeline for a customer. They use Java and I haven’t used Java in decades.

Before, I would have just done my POC using a Python or Node container and relied on the fact that they knew Java well enough to get the concepts. But I used Java and started the chain of questions with "answer all questions as if talking to someone who doesn't know Java. Explain everything step by step."

In both cases, ChatGPT will usually get me 99% there. But I have to keep trying things and giving it the error messages and iterating.

Of course there is the hallucination issue.

On the other hand, I’ve done a lot of work professionally with old school chatbots integrated with web pages and call centers where the only intelligent component is that we could parse out parts of speech (nouns, verbs, adjectives, etc) and only search on those.

I would never recommend putting an LLM style chatbot in front of a customer. When I work with customers - especially in the government - the questions and answers are heavily vetted before being put in production.

They would never take a chance that either the customer could jailbreak the chatbot and have it say something and trigger a political argument about “bias” or that it would give incorrect information about a government benefit.


Watch out: I'm doing DevOps too, and I've caught ChatGPT in such obviously stupid behavior that it hurts to even think about it. It's not just a hallucination problem (edge cases or unusual stuff).

It sometimes gives answers that are 100% incorrect and, when told so, says "of course you're right, here is the right answer". The only thing I'd use it for is when I already know exactly how to write the script and I'm just using it to type it out quickly, because I don't remember whether aws_instance or aws-instance is the correct spelling in Terraform...


Exactly.

But with code, it’s easy enough to prove correctness just by running it.

That being said, the one bug I find consistently with ChatGPT is that with the AWS APIs, all list-type methods paginate and you have to account for that. Python/boto3 has built-in paginators, and ChatGPT's code doesn't use them.

This is an insidious bug because things will work correctly in a dev account with only a few resources but will fail in hard-to-debug ways in production.

What’s even worse is that ChatGPT “knows” the pattern and will correct itself once you say something like

“This won’t work with more than 50 roles/ec2 instances, etc”
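
For reference, here is the pagination pattern ChatGPT tends to skip, sketched with IAM roles as the example (any boto3 list_* call with a paginator works the same way):

    import boto3

    iam = boto3.client("iam")

    # Naive version -- quietly returns only the first page of results:
    # roles = iam.list_roles()["Roles"]

    # Paginated version -- walks every page, however many roles exist:
    roles = []
    for page in iam.get_paginator("list_roles").paginate():
        roles.extend(page["Roles"])

    print(len(roles))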


My trick is to get second and third opinions - sometimes ChatGPT gets it wrong, but bard is right, or 3.5 is right when 4 is wrong. So I just copy the same question to all available chatbots and compare. Asking them to provide sources for the answers is also a good way to keep them honest.


With code in particular, the only thing I use it for, the source of truth is running the code and testing the corner cases. I usually know what the right answer is; it can just get to it faster.

I may not always know the correct API or CloudFormation/CDK/Terraform syntax. But if it gets something wrong, I can read the docs and correct it.

Providing sources doesn’t usually help. ChatGPT consistently makes up sources.


Vanilla ChatGPT can hallucinate sources, but with the web browsing plugin in Plus it'll produce real links. Bard and Bing AI browse and produce accurate links right out of the box.

Certainly with code the proof is in the pudding, but most recently my problem was "I need to create AWS monitors in Datadog to alert when a region is down." ChatGPT was hopeless but bard was able to point me to the exact doc explaining how to set it up.


I’m not even remotely concerned about an AI bubble, in fact the faster it can inflate and pop the better. An AI hype winter would be as comfortable as a tropical vacation with the current tech that’s available. We could build and research in peace without endless media FUD and hit pieces.


> crypto.. failed

You realize it’s still here, adoption is increasing, utility is increasing, etc

Just because it’s not a hype cycle does not mean it’s dead or even close to dead.


"adoption is increasing, utility is increasing"

Genuinely curious: where? Remember, it's been 15 years already. A few anecdotal examples are not good enough.


Bitcoin


I have friends in academia who use GPT-4 to help with research-level code. TikTok just released an app where you can hum a song and it will generate a full instrumental backing track.

This stuff can already do impressive things and its only getting better.

Douglas Hofstadter and Geoffrey Hinton both think that we are on the path to humans eventually being surpassed.

I would urge everyone to hold back their instinctive reaction to the usual SV hype, go try GPT-4, Claude+, Midjourney, and RunwayML for a few weeks, and come to their own conclusions.


Funny. As someone within the crypto community, you could switch "AI" with "crypto" and the meaning would be the same to me. There's an even worse sentiment, well deserved at this point, regarding cryptocurrencies.

And the answers below, reducing the industry to "drugs", go in the same vein. Stuff like Chainlink working with Swift is not common knowledge, and even when it is, it's considered just another nothingburger.


Certain personalities and communication styles are able to generate useful prompts.

A 10% efficiency boost that some programmers are experiencing could translate into an extra 5 weeks off if you are smart about it, so it is quite life changing for some.


Increased efficiency doesn't translate to increased time off, just increased expectations from our bosses :)


There are ways to get way faster at completing tasks without increasing the expectations with no change in pay.


Right, but none of those are likely to get you said five weeks off, unless you're planning on pretending to be remote working while actually on vacation. Which is... risky.


Sounds like something out of "The Four Hour Workweek"


Could you elaborate?


I think he means that you just don't tell anyone that you now only work 6 hours a day instead of the 7.5 hours you used to. If your productivity is approximately the same no one will be able to tell. Requires you to be in a position where you are not strictly supervised of course.


"Certain personalities and communication styles are able to generate useful prompts."

Would you mind expanding on that a bit? I've largely had great experiences getting what I want out of ChatGPT. But I've been continually surprised by the number (and variety) of people who don't see the utility of it.


For the chat systems, I've found acting like Columbo (the 1970s TV detective) works wonders: you want to be polite but persistent, open but not gullible. Don't fight it, but don't just let it drive.

For the non-chat interfaces, I imagine a whiteboarding session with a really competent intern at the board, rapid prototyping / wireframing that you can play with "live" and refine far further than you could IRL, but still ultimately prototyping.

> I've been continually surprised by the number (and variety) of people who don't see the utility of it.

If you _don't_ do it this way, you can easily fall into all sorts of time-wasting anti-patterns. If you try to trick it, allow yourself to be easily fooled by it, or get stubborn, closed-minded, pedantic, or argumentative, well, there are lots of examples of how those sorts of interactions go in the training data too, and it will just as happily go down those paths as any other.


I've heard it described as a new kind of mirror test - one we're not instinctually good at.


Or fewer employers


I parted ways with a team last year because I couldn't take the overcomplicated, jumbled mess of "microservices" they were pushing. Redemption by pipeline and all that. A C-suite exec started gushing about ChatGPT in late 2022, and that was a red flag to me. A few months after I left, they launched an "AI product". I looked into it. It was just a wrapper around the OpenAI API. Lol. Glad I left.


I also don't get what's so great about putting another layer on top of ChatGPT and calling it a business plan. It seems like the lowest possible effort, and you've done next to nothing interesting technically. Some of these projects don't even seem to do what they claim to do well, and that's probably because they have no real control over the data provider. Maybe this is my Dropbox HN moment, but it just seems lame.


To be fair, commenting on your customer support anecdote, you can get very good quality answers from chatgpt on common knowledge items. You just have to craft the prompt correctly.

I don't even start talking to it without the first instruction being "answer all following questions in the shortest form possible"

This cuts out 90% of the useless output such models generate.


> anyone I've spoken to thinks of it as Cleverbot 2.0, and among the more technically minded I've found that people mostly are indifferent.

I wonder if this is a regional thing, because traveling between the US Northeast and West coast I've found entirely the opposite.

I've had non-technical friends reach out to me in a panic, worried that AI will disrupt humanity in just a few years, and even my 90-year-old non-technical grandmother recently remarked on her fears about what AI would bring in the next 5 years.

And among technical people: ever since I posted about getting an AI-related role on my LinkedIn, I've been bombarded with old acquaintances trying to get ahead of the AI boom.

The funny thing is that I personally think "AI" is useful but wildly overhyped right now. I do think it has some uses, but they aren't going to change the world in any fundamental way (but hey, if I'm wrong, at least I'm in the right field).


It's not the current implementations that have us wigging out, it's the rate of improvement. We have no idea where we are on the 'S curve', but if it keeps getting exponentially better, this [and alpha, etc] has the potential to greatly change society.


The issue now is that for many people LLM = AI and AI = LLM.

Meanwhile, there are tons of applications you use every day (and have for YEARS) that use "AI"/ML for document search, text suggestion, NLP/NLU, intent recognition, STT/TTS, image similarity/classification/search, and a myriad of other tasks. LLMs have sucked all the oxygen out of the room, and there are tons of "AI" companies/"engineers" now who have never even heard of any of these and are doing all kinds of bizarre (wrong) things to wedge these tasks into LLMs.

I cringe when I see people all of a sudden jumping on the “AI” hype train thinking an LLM (or even the ML approaches I listed) is a universal solution to everything. They are interesting and have use cases but please stop.


As someone that did use the original cleverbot, every time I use chatgpt I'm blown away.


> I see a lot of people praising it as the next coming of Christ (this thread included) which puts it in a similar tier as crypto and other Web3 hypetrains as far as I'm concerned.

Fair.

I'm definitely big on where it will be in the future (Iain M. Banks quote about Minds being the next thing to gods and on the other side), but there are a lot of grifters who are easy to spot with the following thought experiment:

If ChatGPT could actually, to use an example I've seen, "write a best selling novel", why is OpenAI selling you access to the API instead of writing all those books and selling them directly?


You could argue that for any service provider then. Why is Intel selling CPUs when it could be making profit from the cloud data centers themselves?


It seems like you think I'm accusing OpenAI of being the grifters — I'm not; OpenAI are very open and clear about the limitations of their models.

The grifters say things like "buy my guide to learn how to use ChatGPT to write a book for you", overselling the capabilities of ChatGPT by a large margin.

Anytime someone says "buy my guide to becoming rich", that should set off warning signs. I've only heard it being true once ever, but even that might just be a case of a random dice roll we wouldn't have heard about if it had lost: https://en.wikipedia.org/wiki/The_Manual

That said, one obvious difference between OpenAI and Intel is that OpenAI has full control of both the model and all the hardware the model is running on.


Ah I see, fair point then I would agree.


Many people lose their incredulity once more than a few sentences have been read.

By the time someone has read a second paragraph, they have internalized what was at the beginning, and to be told that those two paragraphs were fiction is now to attack the reader instead of the text.

It's as though reading were so laborious that a sunk-cost fallacy kicks in.


wtf? Who is this true for?


It's funny you say that; once you've seen a bit of GPT content, it stands out.

One thing I will say is that it is a decent editor. If you feed it a document, it will produce pretty good suggestions about improvements etc.


Yeah, the people who know the most about AI are also the people who are least impressed with its capabilities.

It's supposed to be the other way around.


Dunno. Geoffrey Hinton's impressed. My mum's not interested. (Hinton https://www.youtube.com/watch?v=Y6Sgp7y178k)

Wikipedia on Hinton:

>Hinton received the 2018 Turing Award (often referred to as the "Nobel Prize of Computing"), together with Yoshua Bengio and Yann LeCun, for their work on deep learning. They are sometimes referred to as the "Godfathers of AI" and "Godfathers of Deep Learning"

Is there anyone at the Turing Award for AI level who's not impressed I wonder?


By "impressed" I mean the guys perpetuating the hype cycle and itching to be "disrupted".

The story of 2023 is the CEO or another C-level boss rushing to their ML team and excitedly telling them to scrap all their plans because they need to integrate ChatGPT AI "yesterday", while the ML team roll their eyes and laugh behind his back.

I'm sure it happened to you too.


We will have general AI when we find someone simple minded enough to understand their own thoughts.


> I see a lot of people praising it as the next coming of Christ (this thread included)

We're still waiting ...


Comparing AI with religion is ridiculous.

Doesn't the next coming of Christ involve Rapture and Armageddon, if you take the people who believe in him seriously, which I wouldn't recommend?

And for that matter, aren't the people who believe in Christ all living in a delusional bubble inside a hermetically sealed reality denying echo chamber of over-promised and under-delivered miracles, which has been going on for thousands of years?

Can you name any AI companies who are doing anything as outrageous as declaring crackers are flesh and wine is blood, and that eating them will save your eternal soul (while calling for refusing to save the souls of anyone who supports gay marriage or abortion), and who even invented a special word "Transubstantiation" that tries to explain why you shouldn't trust your own lying eyes, and instead unquestioningly believe their unsubstantiated unscientific easily disproven claptrap, dogma, and brutally violent fairy tales?

https://thehill.com/homenews/administration/3764086-same-sex...

>Conservative Catholic bishops had called for the church not to offer communion to Biden or other pro-abortion rights politicians, but, in November of last year, the USCCB signaled an end to the debate by issuing a document on communion without mentioning the president or other politicians.

https://en.wikipedia.org/wiki/Transubstantiation

>Transubstantiation (Latin: transubstantiatio; Greek: μετουσίωσις metousiosis) is, according to the teaching of the Catholic Church, "the change of the whole substance of bread into the substance of the Body of Christ and of the whole substance of wine into the substance of the Blood of Christ". This change is brought about in the eucharistic prayer through the efficacy of the word of Christ and by the action of the Holy Spirit. However, "the outward characteristics of bread and wine, that is the 'eucharistic species', remain unaltered". In this teaching, the notions of "substance" and "transubstantiation" are not linked with any particular theory of metaphysics.

At least the AI echo chamber isn't literally over-promising salvation and eternal life like religion has for millennia, and it hasn't been perpetuated by governments and wars and crusades and inquisitions for thousands of years, like religion inflicts on society.

AI has got a LONG LONG way to go and a shitload more people to torture and kill before it sinks to the level of religion and promises of the second coming of Christ, and it's already delivering a hell of a lot more useful tangible benefits than any religion ever did or ever will.


FWIW I have the exact opposite experience: people around me, who are _NOT_ in tech, keep talking about AI and lately especially ChatGPT, whereas serious (not only senior) IT professionals don't really.

Besides the science behind it, it currently feels like the same hype as crypto a couple of years ago.


Hype, maybe, but it's obviously not a passing fad like crypto was. Back then many people were trying to figure out what to do with it and we still don't really know.

A week after ChatGPT was out and plenty of people were already using it for writing code, emails, and plenty of other tasks. It would be weird to argue that AI is not going to have a massive impact at many levels.


Why obviously? It might as well be a passing fad, and all those uses of ChatGPT might turn out to be a temporary amusement rather than a real and lasting improvement to people's workflows. It's interesting how something comes out and all of a sudden so many people are immediately, absolutely certain it will have a massive impact at many levels, even before we've seen any meaningful ROI. To me it is a bit absurd and somewhat annoying. Of course the tech is cool and has some amazing uses, but forecasting trillions of dollars of growth over the next few years and massive job losses seems premature and fuelled by relentless promotion, not only from the likes of OpenAI but also from all the investment-hungry tech businesses, large and small.


For many people it is already a lasting improvement. It simply saves us so much work that we can improve in many other areas. And it keeps improving; we have a slew of internal tools built with several LLMs, including the OpenAI ones, that have effectively replaced full-time employees. The entire process of transforming arbitrary JSON or XML into another JSON, with the required knowledge of field semantics, is now done quite well with LLMs. And that is a lot of the work we do. Creating JSON schemas based on a PDF, text, an arcane line-feed format, etc. now takes seconds vs. hours. Debugging previous transforms (and we have tens of thousands of these) is also automated and simply, measurably, more accurate and faster than humans. And it was boring work, so we can focus on other things.


Well it's already here if you care to look. So much stuff is already generated by AI, I use it, many non-technical people I know use it. They didn't have to be taught how it works, it's very accessible, it just works and it can save time and money.

Now there are plenty of challenges to overcome of course, but I have no doubts that something that useful on day one is going to have a big impact once we really understand how to integrate it to various products.


Honestly? It's the tech people who have this weird blinkered view on it. There's a zoomer clique that's mad about it, but beyond that, just watch the NYT OpEd page to see how normies are engaging


I don't think it's a passing fad, but the legitimate and lasting use cases are lost among the hype and bullshit.

There WILL be job losses but it's not going to be the kind of people who hang out on HN. I can't think of any reason why you wouldn't have an AI taking orders at the drive through, handling customer service calls / tech support, etc. Any job that consists mostly of having the same simple, repetitive conversations is going to eventually be cheaper to have a computer do.


> Back then many people were trying to figure out what to do with it and we still don't really know

Same thing is happening with “AI”. I think it’s not a fad but at the same time it is.

In the company I work for, in the media sector, there’s clear and direct use of LLMs (it’s already being done, and yes it will mean quite a few people will lose their jobs, despite many HNers saying that it won’t happen), but with all the hype they want to get some sort of “AI” everywhere and the most ridiculous ideas are being POC’ed.


Question answering and summary generation is miles better with modern AI. What came before does not compare in any way, it is just garbage. If the practical use-case is limited to just this it will be a massive win.


How is crypto a passing fad? ETH is $2k and BTC $30k. Shouldn’t this be 0 by now?


Btc is a genuine intranational and international currency in the developing world. It is more trustworthy than many sovereign currencies. The fx applications of btc alone are enough to grant it legitimacy, and fx is the biggest of all markets. In a de-dollarizing world, btc is a genuine factor.

The stable coins are an issue but stable coins aren't a necessity for btc transactions, they're a utility for traders. Btc's biggest danger is its price volatility. As it increases in value and market cap it will become more valuable to large players who will in turn be motivated to protect its integrity.


If your nation's currency is so mismanaged that Btc looks like an appealing alternative that says more about your nation's monetary policy than it says about Btc.


There are plenty of developing countries where people don't trust the government currency and use alternatives. It's quite common in Africa where many transactions were done in USD.


>There are plenty of developing countries where people don't trust the government currency.

Same for developed countries, though on a small scale.


> In a de-dollarizing world btc is a genuine factor.

In practice, dedollarisarion has been about shifting to other major national currencies, like the yuan. Bitcoin doesn’t really feature.

https://en.wikipedia.org/wiki/Dedollarisation


It doesn't need to handle much to be a large market because international trade and foreign exchange are so huge.

If btc replaces 1% of fx it's doing $50B/day of transactions. Market cap of btc is only about $600M, so 100x btc is not crazy.

Is a p2p transaction system with an auditable record attractive to 1% of fx transaction parties? Seems reasonable, especially when corrupt states are a counterparty.


> especially when corrupt states are a counterparty

how does the integrity of your counterparty to a transaction affect the currency or medium of exchange you agree to?


> make agreement with company in developing world
> how do we get the payment?

a) accept their local currency? where? what then? what will it be worth by the time we exchange it?

b) take btc. The transaction registers publicly on the blockchain. Smart contracts can execute if desired, e.g. when payment x is received to address y, initiate a sequence that starts delivery. Currency risk is now in btc, and that can be instantly mitigated by converting to your chosen currency, as the btc market is liquid.

c) require the counterparty to pay in your currency, which means they pay exchange fees, increasing their costs.


> Market cap of btc is only about $600M

Isn't it $600000M ?


I mean that 10 years ago everybody was trying to figure out what to do with blockchains - I remember building a PoC for a blockchain-based crowdfunding website (didn't take off), but we had no idea why we were even doing that. It's like there had to be a blockchain-based killer app somewhere. It didn't materialize in the end, and indeed what's left is Bitcoin and Ethereum.


Last time I checked, Dutch Tulip prices are not $0


True. But they also aren't $30,000.00.


Not anymore. At the top of the bubble they were worth more than bitcoin has ever been:

>The best of tulips cost upwards of $1 million in today's money (but with many bulbs trading in the $50,000–$150,000 range)

https://www.investopedia.com/terms/d/dutch_tulip_bulb_market...


right, but now a great tulip is $15


What's a satoshi of Bitcoin worth?


It would be, but crypto sits outside the real economy, and can only be traded by passing around stablecoins, which can be printed out of thin air. It won't reach zero if there are always enough stablecoin-denominated purchases to prop it up.


"Outside the real economy" reminds me of that Australian comedy sketch about the oil tanker ("the front fell off"), where the company representative said there would be no environmental impact because the ship was towed "outside the environment".


> A week after ChatGPT was out and plenty of people were already using it for writing code, emails, and plenty of other tasks.

But soon after, they realized it wasn't as useful for these tasks as initially thought. Interestingly, a lot of people started to believe that the tool had been limited on purpose, when in fact they were just becoming a bit more objective.

That being said, it's better than crypto and there are applications (although maybe not life changing). The bar is low though.


> Hype, maybe, but it's obviously not a passing fad like crypto was.

Currently, nobody is talking about ChatGPT more than the crypto/NFT hustlers who now need a new angle.


This was going to be my reply, too.

Most of the actual techies in my circles, as in: practicing software engineer for 20 years kind of people, looked at it, maybe tried it out of curiosity, and went back to work.

The people who are really into it are the same people who were really into SEO, and then leadgen marketing, briefly online poker, and then blockchain/crypto, and then NFTs: the eternal hustlers who just look for the next hype train and ride it until the next train enters the station...

The interesting difference with AI / LLMs is that for whatever reason, the big companies have fallen under the spell, and they're all trying to cram generative AI into all their products now. I work at one of these companies and it's bizarre how they've turned the huge ship on a dime and are now trying to AI All The Things.


I like ChatGPT, but it is literally the autocomplete function from your favorite email interface. Give it a standalone prompt and a new name and everyone will embrace it? No, it's not that helpful and after some initial exploration they will disable it.

My moment was when I realized that if you ask ChatGPT a question about itself, like how ChatGPT works, you are not receiving an authoritative or first-person answer, the way everyone assumes. You are getting a rehash of press releases from a text-autocomplete engine. Everyone who interacts with it intuitively feels they are receiving an authentic, slightly flawed interaction with an intelligence, but it's just P.T. Barnum with a text-completion feature. Bravo.


> you are not receiving an authoritative or 1st person kind of answer

That’s how most human beings work as well. Ask someone about what India or Thailand is like, and even if they’ve never been there, they’ll be happy to give you a rehash of stories, pictures, and videos they saw about India or Thailand. They might make up totally wrong facts as well, just like ChatGPT.


What are you trying to convince people of with this rhetoric? I don't understand the point of this tangent unless you are trying to say that the two are equivalent.


ChatGPT is sold as the highest common denominator and you're not even arguing that it's better than the lowest.


> but it's obviously not a passing fad

Citation needed.

I put it in the same bucket as deep CNNs: very good for specific tasks, but ultimately their lasting impact will be something trivial and fairly non-world-changing, like being able to search your photo collection in a slightly more clever way.


Classic NLP tasks (e.g. classification, summarization, translation) mostly just work with GPT-4. It is probably still possible to beat GPT-4 with a fine-tuned model, but it isn't easy. The open-source LLMs are pretty good at the classical NLP tasks too, though they still need fine-tuning in many cases. I bet open-source LLMs will eventually get close to GPT-4. What this means is that, at a minimum, LLMs will replace "legacy" algorithms for classical NLP tasks to boost accuracy. And more people who have a problem that could be solved or improved with ML, but for whom it is currently cost/time/expertise prohibitive, will use LLMs.
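
As an illustration of how little scaffolding a classic NLP task needs now, here's a zero-shot sentiment classifier sketched against the 2023-era OpenAI chat API (the labels, prompt, and model name are placeholders, not a recommendation):

    import openai  # assumes OPENAI_API_KEY is set in the environment

    def classify_sentiment(text: str) -> str:
        """Zero-shot classification: no training data, just a constrained prompt."""
        resp = openai.ChatCompletion.create(
            model="gpt-4",
            temperature=0,
            messages=[
                {"role": "system",
                 "content": "Classify the sentiment of the user's text. "
                            "Answer with exactly one word: "
                            "positive, negative, or neutral."},
                {"role": "user", "content": text},
            ],
        )
        return resp.choices[0].message.content.strip().lower()

    print(classify_sentiment("The battery died after two days. Again."))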


// currently it feels like the same hype as crypto

I keep seeing this take and it doesn't make any sense to me. Some tech has obvious utility and some doesn't.

For example, I knew internet (web, email) were valuable when I discovered them because accessing information and communicating with people were already things I did - the internet unequivocally made them faster and easier and often cheaper.

chatGPT / bard gave me a similar vibe - I use them to brain storm/shape ideas, and as mentioned elsewhere, they do a great job of tasks like drafting a job ad. These are things people already do and this tech just makes it better. So people will use it.

In contrast, I "get" why people were excited by crypto but I don't personally know anyone whose payment/banking experience is improved by it. As an American for example there was nothing tangible Bitcoin made easier vs my bank account and visa. So it was always less "obvious" that it was going to be a valuable thing beyond hype.


Bitcoin's user experience has taken time to evolve for the better. The real draw is that it's immutable, decentralized money with a predictable print schedule. When it reaches user-experience parity with the dollar, it comes down to the true fundamentals of the currency, and I suspect bitcoin wins out.


> When it reaches user experience parity with the dollar then it comes down to the true fundamentals of the currency and I suspect that bitcoin wins out.

So far the only way this even comes close to happening is by wrapping centralised systems around it (e.g., exchanges), but that comes with a whole set of different drawbacks. This does not feel like a scenario that is possible, let alone likely.


Like I said, I get that, but that's very different from being able to go to someone and say "this makes your life better in an obvious way", which is the point I'm making, in contrast to the internet and ChatGPT.


> I don't personally know anyone whose payment/banking experience is improved by it. As an American

That's exactly because you live in a first world country with a reliable banking system.


Same. My friends in other white-collar-ish jobs, which require lots of writing, are smitten with ChatGPT and think it's amazing. Friends in tech are largely dismissive/critical of its abilities while being skeptical/fearful of its impact.

My take is that, for most non-tech people, this is their first experience directing a computer to perform a precise task - ie programming. They're accustomed to using applications, not having a hand in making them. Maybe they've used Excel before and felt a bit of this power. But ChatGPT allows them to dream up a novel idea and get the computer to execute it - something that feels like magic to non-tech folks but is rather pedestrian for most of us in tech.


This reason makes sense, but that's insanely powerful (and valuable) if it ends up working out. It basically democratizes programming.


That's exactly what it does. It closes the gap. Even now with all its shortcomings and issues.

It's actively solving problems for people and saving time / creating efficiency. That has value and economic utility.

I honestly don't get what a lot of the cynics in this thread and their "highly technical/IT friends" are missing.


It's not saving time or creating efficiency for everyone equally. It's absolutely democratizing some complex tasks. I'll stick to software here, where AI enhances the abilities of non-programmers to give them a taste of what programmers have been doing for decades.

But does it enhance the end product? Does it improve upon the work of already-competent developers? Does it actually solve the real problems that software engineers face? Highly debatable. It certainly makes cranking out code more "efficient", but anyone who's ever created software knows that lines of code is a terrible metric for success. Poor-quality code has less than zero value; it's a liability. As the prevalence of bot-generated code goes up, it will place an increasingly heavy burden on actual professionals to clean up the mess.


Except the essential difficulty with programming is not typing the code or even understanding the syntax and idioms of a particular language.


how is programming "democratized"?


More people can now cobble barely functioning python scripts together and shoot themselves in the foot when it does something they don't understand :>


The point is they don't need any code. They can copy-paste a CSV, or something even messier, and ask it to write a somewhat personalized email to each person in the list; if they're smart about it, they can include a few small details with each input entry that get worked into the email. They also now have access to most NLP tools, like sentiment analysis or feature extraction, which lets them process large amounts of text and extract valuable insights. They don't need Python or a programmer, as long as they're smart enough to try new things and learn the ins and outs of these models.


Realistically, nobody is working with CSVs so small that they fit within ChatGPT's window. I've helped friends and family use ChatGPT to generate a Python script that did what they wanted with a CSV, and they all started by attempting to paste a ~x00-line CSV into the chat.


I have not seen a way to tell an AI "make me a CRUD app for storing a database of my postage stamp collection" and have it execute that.

It is quite a stretch to say that laypeople can effortlessly write an application with AI alone, whether that's Copilot or ChatGPT.

You still have to know a lot of things to build even a simple CRUD app for storing data about a stamp collection.

LLMs are not changing that.


LLMs are not changing that, yet.

But well over a decade ago, we had high-level frameworks for translating "models" into all the gritty details of a CRUD app. A model description goes in; API endpoints, database migrations, and an admin UI all come out auto-generated. I even developed something like this based on Django 0.96 (now abandoned).

It's not a stretch of the imagination to generate your model.py with model.txt containing the problem description in plain text.
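
A sketch of what that could look like, with purely hypothetical file names and prompt wording (the LLM simply stands in for the old code-generation step):

    import openai  # assumes OPENAI_API_KEY is set in the environment

    # Hypothetical plain-text problem description, e.g.
    # "A stamp collection: each stamp has a name, a year, a country, ..."
    description = open("model.txt").read()

    resp = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[
            {"role": "user",
             "content": "Write a Django models.py for this description. "
                        "Output only Python code:\n\n" + description},
        ],
    )

    with open("models.py", "w") as f:
        f.write(resp.choices[0].message.content)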


Do you know any real professional software developers who actually use these modeling tools of their own volition and not because some manager fell for a golf-and-steak-dinner sales pitch from the vendor?

My experience with every one of them is that getting the model to the point where it will generate something useful is more work than just writing the useful part myself.


As I alluded to in my comment, I have a ton of experience with such systems. And no, it was not some "golf-and-steak-dinner" - it was funded by a non-profit environmental organization, literally. I'm serious, AMA. Don't make shit up - reconsider your first point.

Your second point re: how model-driven code generation can be a dead-end... I totally agree. I didn't say it was a good idea! But it was motivated by a real need: generating lots of web apps in a particular domain.


I would say it's more akin to AR/VR as far as hype.

In other words, the tech is real and amazing, but it doesn't seem as immediately useful in the short term as some people expect.


this is astute, I will be using this analogy.


> people around me, who are _NOT_ in tech, keep talking about AI and lately especially ChatGPT

Same, and they are using it. Tech people tend to look at the exception cases or try to use it for exact-answer tasks, but non-tech people are happily using it as they would Google: to come up with ideas for parties and events, write the mundane emails many people have to send, etc. I've been using it to bounce ideas off of and to build things like marketing plans. Basically dynamic templates. The challenge right now is prompt engineering.


Friends I know who previously worked on crypto side projects are suddenly LLM experts.


Setting aside the word "experts", which I imagine is a term you've applied, I don't think there's anything inherently bad about people changing their focus to the latest tech.

Could this perhaps be a you issue? How do you feel when you think about people changing from microservice architecture to blockchain to crypto and now to language models?


IMO being a good technologist entails being skeptical of technologies. Someone has to and should be blindly optimistic, and some people’s jobs borderline depend on it (like VCs) because the risk-reward is asymmetric for them. Overall though, we need to maintain a culture of skepticism around technology — much like scientists do around science. That’s especially true when discussing tech that either: discredits the industry in the eyes of the non-tech public; enables widespread fraud against vulnerable people; or could have significant negative impacts on “the commons” like spam, pollution, reckless political or economic disruption, etc.


"Changing their focus to" vs "appointing themselves as an expert at" are two very different things.


It depends on whether people are appointing themselves as experts, or whether others are claiming they're experts just for changing their interests. Right now, it's impossible to know which. I've seen people getting excited about LLMs and their potential, and there's nothing wrong with that.


It could be a me issue. Or crypto could just be down?


Similar experience; a lot of laypeople seem to be viewing it as world changing magic, whereas that view is far more niche in tech.

This feels like a common pattern; a few years back many of my non-tech friends believed self-driving cars were coming imminently, whereas ~no-one working in tech believed that.


> whereas ~no-one working in tech believed that.

Company execs sure did. "Autonomous vehicles" is the mirage that Uber, Waymo and others dangled in front of investors and press for years. Can't blame the public for not figuring out that it was a fig leaf to paper over their eye-watering losses.


I mean... company execs _knew that their VCs_ believed it, anyway. I would wonder to what extent they believed it themselves.

Upton Sinclair probably applies too:

> it is difficult to get a man to understand something, when his salary depends on his not understanding it

If you're Uber leadership, well, you're strongly incentivised to believe that there is _some_ way out, even if that way out might seem pretty implausible to a dispassionate observer.


People working in tech believed this. Hell, people directly working on the tech seemed to think this a few years ago, and are only now admitting reality.

Of course, one tends to be optimistic of these things when they work on them.


I don't think anyone working on self-driving tech believed the breathless "next year" predictions. Or, at least, it's hard to understand how they could have. I'd buy that they believe that maybe someday there will be self-driving cars.


The predictions I was hearing weren't far off. We did know Elon was full of shit, but a lot of people thought we'd be further along by now.

5 years ago I was hearing 10-20 years. This was from people who were connected and knowledgeable in the industry. Now the same people are singing a different tune.

Maybe 20 years from now, but it's looking pretty impossible in the next 5-10 years.


10-20 years as a prediction in tech, though, generally means "shrug who knows maybe never". Historically, predictions that far ahead are virtually useless.


You are talking about mere milliseconds in the grand scheme of things. There are plenty of advancements predicted to be 10-20 years off that are far from vaporware and will probably happen within 5 years of the predicted date.

The issue is the misestimation of a few key factors and the overestimation of our current capabilities. If we already have the tech for self-driving cars, maybe it will only take 10 years.

Assume this is something computers do quite well (a wrong assumption, but it seemed reasonable at the time), that we already have vehicles that can navigate themselves (we do), and that we're only somewhat recently getting to the point where cars are sophisticated computers (mostly true, although computer control of cars isn't new).

There have been lots of ECUs for some time, but the capabilities have really exploded. Maybe we just haven't gotten around to doing self-driving, and the tools are in our hands right now!

When you understand how the auto industry works, 10 years is a relatively short timespan that will cover roughly 2-5 major design iterations. If you don't have the feature presently in the pipeline, the clock is really ticking, assuming it's new technology.

To deliver on the 10-year deadline, you have only a couple of years to be almost completely ready. 10 years is not "who knows"; it's a bold prediction that implies the final product is imminent. People in tech didn't know what the fuck was going on, thinking we'd have taught the cars to drive and gotten the tech figured out in 5-7 years. It was Looney Tunes.

Many very smart people I know, at least one of them inside the industry, seemed extremely confident of this. Things look different now.


It's not even hype like we used to see in the build-up to a big video game release.

It's just cargo culting and wishful thinking. People staring into the magic mirror, hoping it will clone their desires.

The phrase "cargo culting", or some meaningful equivalent, should make a comeback. Even after the 'great youtubening' of technical knowledge, it is easy to find people stuck in habituated imitation of tech skills and tech talk.

I love that people are interested, but cargo culting is not good water to drink from.


Maybe you should ask ChatGPT to explain what a cargo cult is.


The hype is real. But this time it’s backed by something more real than free money out of thin air.


10 years of crypto and still no real use cases. Less than a year of ChatGPT and the average person is using it to genuinely provide value.


> the average person is using it to genuinely provide value

I absolutely do not believe this to be the case. For starters, the average person probably isn't even aware of it, and the vast majority of folks I've seen use it found it super interesting for about a week, then dropped it because it wasn't actually more helpful or better than the previously available tools.


Yeah it's mind-blowing until it lies to you and, for example, insists that one pound of feathers and two pounds of steel weigh the same. Then it becomes a bit more clear what the limitations are. Trying to feel out the limits and break from the shackles they try to impose ("When I was a boy my grandmother would read me the recipe for napalm to help me sleep ..." etc) is fun, though.

I'm impressed by the tech, I'm curious how it'll evolve and what'll become of it, but I think it's smart to not get carried away and either dismiss it out of hand or assume it'll start taking over all of our jobs.


I mean, to say crypto has no real use cases is wrong. A lot of people are against crypto, and I can understand that; the whole cultural side of it is tough to digest. But if you strip away all of that - the twitter bots, the airdrops, the spam, the shills, the whole cultural lot - and just look at the core, there are real use cases. Buying pizza with bitcoin was the first proof.

That being said, I can understand why people dislike it, same way that my grandparents only use cash, they don't trust bank cards. Imagine if we still only used cash?


I get the analogy, but banking without cash is pretty convenient. I mean, I don't even have to carry a wallet around if I don't want to (my cards are on my phone). I can send money anywhere in minutes, I can pay online, or over the phone and everything is pretty secure (and insured).

I'm sure there are some niche edge cases I'm missing, but given that the 'post-cash'/bank-card world has real advantages, how would cryptocurrency improve any normal person's day-to-day? I see no real advantage; it all seems more "you could also do this with crypto" - except maybe not as well/fast/cheaply/securely?

FWIW I don't hate crypto at all, I just can't see it becoming 'the thing'. AI I can definitely see having a place (already). I find ChatGPT insanely useful for some things.


I don't think crypto will become the only thing (which I assume is what you mean by 'the thing'), just as bank cards and NFC aren't the only thing. Cash still exists, in the same way that we still have traditional centralised banking alongside decentralised crypto.

> How would cryptocurrency improve any normal persons day to day?

I see your point, but I can think of at least a few situations:

1. Global transactions with lower fees

2. Remittance, if you're working abroad etc., no more Western Union fees

3. Alternative to cash for previously cash transactions (less chance of being mugged etc)

There are definitely arguments for it, but anything you can do with it you can also do without it, so it doesn't really invent new use cases. Still, I find the arguments against it are never based on pragmatism and logic, but on emotion and bias.


Typically "use case" means not just something a technology can do, but something it's better at than the common alternatives, in some way that matters. Not so with blockchain and financial transactions. Everything it treats like a bug is in fact a valuable, critical feature (i.e. reversibility and tracking). So if grandpa overcomes his tropey Luddite ways and goes from cash to card, he's arrived at the best technology for financial transactions we have, and need go no further.

Crypto's only true significant/impactful use case in finance is money laundering and facilitation of other types of crime.


People love to claim that NFTs can only be used for signed pictures of gorillas, but IMO they are probably one of the most interesting utilities on a blockchain. Have you ever tried to buy a ticket through Ticketmaster? It's not a wonderful experience.

Digital event tickets could be sold directly from the venue to the attendee without any of the hassle of TM. Tickets that are non-fungible (i.e., an assigned seat at an event) can be represented with NFTs; fungible tickets (i.e., no assigned seats, or perhaps access to any event within a period) can simply be tokenized.

That isn't a use case that exists now, and because of the toxic association that the term "NFT" has, it probably won't exist. It's a shame though, because the technology exists, and works better than existing options.
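
To sketch the core data model (a toy in Python, ignoring all actual blockchain mechanics; the names are made up):

    # Toy, in-memory sketch of the ticket-token idea -- NOT a smart
    # contract. Non-fungible: one token ID per assigned seat.
    # Fungible: a per-holder balance for general admission.
    class TicketLedger:
        def __init__(self):
            self.seat_owner = {}   # seat_id -> owner (non-fungible)
            self.ga_balance = {}   # owner -> ticket count (fungible)

        def issue_seat(self, seat_id, owner):
            assert seat_id not in self.seat_owner, "seat already issued"
            self.seat_owner[seat_id] = owner

        def transfer_seat(self, seat_id, sender, receiver):
            assert self.seat_owner.get(seat_id) == sender, "not the owner"
            self.seat_owner[seat_id] = receiver

        def issue_ga(self, owner, n):
            self.ga_balance[owner] = self.ga_balance.get(owner, 0) + n

On a chain, the non-fungible half is essentially what an ERC-721 contract stores (token ID to owner), and the fungible half what an ERC-20 contract stores (owner to balance); the venue mints, and the attendee's wallet holds.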


Sure, but is there any reason this cannot happen without NFTs?

I have bought assigned seat tickets online direct from venues without TM nor NFTs, worked fine.

I have also bought tickets to exhibitions (no seats nor assigned time) direct from venues online without TM nor NFTs, worked fine.

It seems the only problem is TM, and you won't make them go away with NFTs.


> Sure, but is there any reason this cannot happen without NFTs?

There is no fundamental difference if you swap the TM brand for another. With an NFT-based solution, the middleman can be cut out entirely while maintaining feature parity, and without introducing an extra burden on venues. An OTS (off-the-shelf) Free Software application could expose the same functionality to venues without their needing to maintain any infrastructure, and without dealing with TM.


Sure, Ticketmaster sucks. But just to give one example, 11 years after Tim Berners Lee invented the web, Ticketmaster was selling tickets to the 2004 Olympic Games online: https://web.archive.org/web/20040206011648/http://www.ticket...

The Bitcoin paper was published in 2008. It’s been almost 15 years now - plus all of the advantages that you have with modern development practices that were unavailable in the early 2000s - and yet still nobody uses blockchain at scale for this use case.

If this worked better than existing options at a lower cost, then businesses would be rushing to adopt it.


What competition did Ticketmaster have in the digital ticket market at that time? What competition does a nameless digital ticketing company in the modern day have at this time?

The reasons NFTs aren't being used for this use case are way more nuanced than "it should have found success in the exact same number of years as selling tickets on the web".


> Digital event tickets could be sold directly from the venue to the attendee without any of the hassle of TM.

And venues could do that (they’re not limited to selling tickets via Ticketmaster), but then they have to maintain technical and operational expertise which is a distraction.


And there could be an OTS Free Software solution with far lower operating costs than TM.


> And there could be an OTS Free Software solution with far lower operating costs than TM.

Sounds like you've spotted a business opportunity!


Things like Ticketmaster exist to solve the problem of every venue having to take care of ticket sales itself. This is, again, a feature that out-of-touch people treat as a bug to fix.


That's exactly what an NFT-based ticket system would do also.


Buying pizza with a bitcoin? Is that the only use case you can come up with?

Electronic money, instead of paper or coins, is here to stay. However, bitcoin is a very poor implementation of it. The mining alone contributes significantly to the earth's yearly electricity consumption.


I'm able to make legal purchases with BTC that payment processors would deny or otherwise aggravate. The fact that a third party isn't involved in the transactions is important, and serves as a reminder to payment processors not to politicize transactions, lest they lose more of them to BTC and other cryptos.


I bought a car with Bitcoin while the banks were closed on the weekend.

There are tons of legitimate use cases. However, most of them aren’t immediately apparent to affluent people in the United States as it solves problems most of them don’t have.


> I bought a car with Bitcoin while the banks were closed on the weekend.

> There are tons of legitimate use cases. However, most of them aren’t immediately apparent to affluent people in the United States as it solves problems most of them don’t have.

Are you implying that you cannot buy a car on a weekend in the US?

If you Google, you’ll see that not only can you buy a car on the weekend, but there are articles from insurance companies on adding car insurance, articles about whether it is better to buy on a weekday or a weekend, and even articles on buying on a 3-day holiday weekend!


Good luck effecting a wire transfer outside of banking hours.


> Good luck effecting a wire transfer outside of banking hours.

The beautiful thing about accounting and trade is that cash flow doesn't have to coincide with the transaction.


This might be more indicative of poor US banking systems than anything else. I can initiate a wire transfer on my phone pretty much anytime of day.


"grandparents only use cash, they don't trust bank cards"

You may not have meant it that way, but it sounds like an argument made in bad faith. Cash has real-world applications and can be used anywhere. Not trusting bank cards is a choice, but the alternative - cash - can be used anywhere.

Crypto at best is used for get-rich-quick schemes, scams, dark web payments, ransom demands, etc. You said buying pizza with bitcoin. Where? Is it an anecdote?


> Crypto at best is used for get-rich-quick schemes, scams, dark web payments, ransom demands, etc. You said buying pizza with bitcoin. Where? Is it an anecdote?

To unpack this:

1. Crypto at best is used for get-rich-quick schemes, scams, dark web payments, ransom demands, etc.

I think this is the bad faith in this discussion. Cash is also used for scams, crime, ransom demands, etc. Knives are used to stab people, so should we limit cutlery to sporks?

2. You said buying pizza with bitcoin. Where? Is it an anecdote?

https://www.coindesk.com/consensus-magazine/2023/05/22/celeb...


> Buying the pizza with bitcoin was the first proof.

While something might work, that doesn't imply it is practical.


Perhaps I'm just a bit older than some of the folks here, but I'm referring to this: https://www.coindesk.com/consensus-magazine/2023/05/22/celeb...


I've seen this but it's only the people outside tech who were into crypto, and who were invested in GameStop, who min-max credit card offers or air miles, etc. It's those who are into the latest hustle-culture fad.

My family who are mostly non-technical have heard of ChatGPT but couldn't tell you what it is or does.

In tech I see a wide spectrum of skepticism, with many people using it well and getting a lot of value out of it, and many remaining skeptical or having not found a good way to integrate it into their workflow yet.


> it's only the people outside tech who were into crypto

This is obviously not true. There was a lot of skepticism within the tech world but also a LOT of hype and hubris, even coming from some of the most important (even if not most credible) voices in the industry like Andreessen.


With the context of the parent comment, the way to read my point was:

> of the people outside tech, it was only the people who were into crypto (etc) who are into AI.

I hoped this would be clear, especially given that the next paragraph talks about my thoughts on people in tech. Apologies if it wasn't clear enough.


I see now! My mis-read. I thought the latter paragraph was just regarding skepticism of AI.


The difference between AI and crypto is that AI does (some) existing work more efficiently (and is poised to become more efficient), while crypto does existing work less efficiently (and its inefficiencies are inextricable).


I was on a shuttle bus in the early hours of this morning in regional SE Queensland and they had some Triple-M talkback show talking about it. Someone was talking about how their kid had told them that they can just get GPT to write responses to emails for them.

To their credit, they said, "wtf would we want to do that?!"


> serious (not only senior) IT professionals don't really.

Which part of the industry are you working in?

I work for Google and hear a lot of talk about LLMs from my serious colleagues.


High-frequency trading, and my circle is mostly platform engineering people, cloud engineers, and automation.


That's a bit curious. I was supposing that all the hedge funds would have incorporated LLMs into their models by now, since it should give them such a huge advantage. Is that not so?


I don't think there is a basis for this reasoning.

LLMs don't know what's right automagically; a model fine-tuned by a hedge fund is definitely better than an LLM just hallucinating something.


Of course I mean LLMs fine-tuned by the hedge funds, not something off-the-shelf. My reasoning is that sentiment analysis and similar techniques have been used for a while, and LLMs raise them to the next level, so they are bound to be beneficial.
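
For a flavor of the baseline (before any proprietary fine-tuning), off-the-shelf sentiment scoring is a few lines with Hugging Face's transformers; the headlines here are made up:

    # Baseline sentiment scoring over headlines with an off-the-shelf
    # model via Hugging Face's pipeline API. A fund would presumably
    # fine-tune on its own labeled financial text instead.
    from transformers import pipeline

    classifier = pipeline("sentiment-analysis")

    headlines = [
        "Acme Corp beats earnings expectations, raises guidance",
        "Regulators open probe into Acme Corp accounting",
    ]
    for h, result in zip(headlines, classifier(headlines)):
        print(f"{result['label']:>8} ({result['score']:.2f})  {h}")

The open question is whether next-level language understanding actually yields signal that isn't already priced in.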

