I've found ChatGPT to be very helpful when I have a phrase on the tip of my tongue or I'm trying to find a nuanced synonym for something. Better results than a thesaurus!
I bet it won't be too long before someone makes an LLM that's specifically geared toward being a conversation partner for language learning.
This made me curious about a couple of languages for which it's been hard to find materials to practice at the right level; I had also read a bit about how heavily the training data is skewed toward certain languages (which makes sense). One of the two is supported right now (and wow you were not kidding about how much it is inclined to give! information!), and one is not. But that's still something.
I tried Yiddish also and Bard more or less refused to engage, which as you say makes sense.
The thing with most of these models (at least so far) seems to be that when you ask a question, you don't necessarily want every insanely wrong answer out there incorporated as an input, and unless there's real *quality control* on the inputs, that's exactly what you're getting.
I can see incredible potential for AI language bots to, for instance, provide really good language-specific translations of very technical instructional language or concepts (e.g. 'how do I put together this cursed IKEA flortporktorstrophe?'). Speaking as what some folks call a "Technical Writer," I can see the appeal in using the proverbial hand calculator to remove some of the drudgery from just taking a bunch of numeric data tables and translating them into functional prose descriptions of What's Happening. That would be super cool!
But asking it questions that don't have a very specific, very factual answer that's explicitly curated & delineated in the underlying training dataset... this, I think, is generally not going to go anywhere useful for now.