- You said at one point that it is a relic that used to ease the pronunciation, but not anymore. Is that a statement you agree with? Because if so, when did it stop doing that and turn into a relic?
Yes, I agree with this statement, and I’ve already answered this question, but here it is again:
It stopped easing pronunciation as soon as the phonotactic constraints of the language changed to once more allow the sequence that was previously disallowed.
That’s the answer.
And, once again, we can test for when this happens by looking for apparent exceptions to the sound rule in question (introduced later by borrowing, analogy, or subsequent regular sound change).
Once apparent exceptions appear, that indicates to us that the phonotactic constraints have changed, and that the sequence is once again being allowed in the language. At that point, “easing pronunciation” no longer makes sense as a descriptor of the alternation (as is the case with the a/an alternation).
- This does not mean that it is a regular sound shift. It never was. It only ever affected this one word.
This is empirically incorrect. It also affected my/mine in exactly the same environment and at the same time (12th to 14th centuries), because, as mentioned, sound change is regular and exceptionless in its environment.
Now, let me ask you a question.
About the a/an alternation, you say that “in every instance it occurs, it demonstrably eases the pronunciation”, but you never say how it eases the pronunciation, or what that even means to you. I, on the other hand, have given you thorough explanations and theoretical underpinnings for my position at every turn.
So, if it “demonstrably eases the pronunciation”, then please do demonstrate it. What’s the strict, rigorous definition of “easing pronunciation” (or whatever we want to call this) that you’re using here, and how is it useful? That is, how does it make useful predictions about the data?
Because currently, your definition seems to amount to something like “it feels better to speakers”, or some equally un-useful metric. If “it feels better to speakers” is your definition (which I’m not saying it is - that’s why I asked), then “I would have eaten the apple” would have “easier pronunciation” than “I eaten the apple”, and I think that’s a bad result for your position.
My definition would probably be something like: “a process that leads to a repair of some sort (by addition, deletion, etc.) to avoid a sound string that is disallowed by a language’s phonotactics”.
No other process would be easing pronunciation, because all other strings would be allowed by the language’s phonotactics.
And, since the sound sequence that the “a/an” alternation supposedly avoids (a schwa followed by a vowel, as in “a apple” would be) is clearly allowed elsewhere by English’s phonotactics (compare “vanilla ice cream” or “Anna asked”, where the same schwa-plus-vowel sequence surfaces untouched), this process cannot, by definition, be easing pronunciation.
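To make that definition concrete, here’s a minimal toy sketch of it in Python (the banned-sequence set, the segment tuples, and the function name are all invented for illustration; this is not a claim about the actual constraint inventory of English):

```python
# Toy model of the definition above: a process only counts as a
# phonotactic repair if the unrepaired string would contain a
# sequence the language disallows. Everything here is illustrative.

BANNED_SEQUENCES = {
    ("t", "l"),  # stand-in for a genuinely disallowed onset cluster
}

def is_phonotactic_repair(unrepaired_sequence):
    """True only if the unrepaired form violates a phonotactic constraint."""
    pairs = zip(unrepaired_sequence, unrepaired_sequence[1:])
    return any(pair in BANNED_SEQUENCES for pair in pairs)

# Schwa + vowel (what unrepaired "a apple" would contain) is not in the
# banned set, because English allows that sequence elsewhere, so the
# a/an alternation does not qualify as a repair on this definition.
print(is_phonotactic_repair(("ə", "æ", "p", "l")))  # False
print(is_phonotactic_repair(("t", "l", "ɪ", "k")))  # True: repair would be triggered
```

The point of the toy is just that “repair” is defined relative to what the phonotactics actually disallows, which is what makes the definition testable in the first place.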
- If your theoretical framework doesn’t allow something that happens, isn’t that bad for the framework rather than for reality?
I suppose that depends on one’s perspective, but since you’re a functionalist, it certainly makes sense that you’d see it that way.
If one’s framework doesn’t allow something that happens, that’s a good thing, because it means that the model is falsifiable, and therefore scientific. Since, as you correctly stated, all models are wrong, a good framework should be expected to disallow some things that actually happen if you’re actually doing science.
This is exactly my problem with Role and Reference Grammar, and functionalism in general - it’s not falsifiable. Everything they do is descriptive - they just restate their data a dozen times in a dozen different ways and call it a day, without actually explaining anything. Nothing can prove them wrong, because they never actually say anything in the first place.
Of course they would want their models to be able to account for literally everything that could possibly happen, because they need to have room to describe it, whatever it is, and they don’t care about making useful predictions.
Unfortunately, a model that is powerful enough to account for everything is, of course, also too powerful to actually do anything useful.
This is exactly why generative models are so specific and constrained - we want our models to be proven wrong by new data, so that we can revise them into better, more accurate models.
Luckily for me, though, none of the data you’ve brought up in this comment comes anywhere close to creating a problem for either the regularity of sound change, or generative linguistics in general.
A bit about me in return, I suppose.
I received my PhD in Linguistics in the mid-to-late 2000s, focusing on the core subfields (generative phonology, morphology, and syntax) and historical linguistics, and then worked as an assistant professor for around five years, teaching, publishing, and supervising theses, before finally leaving the field for industry about ten years ago (though I try to stay relatively current on research).
Geoff is fine. I’ve brought up his videos in some sociolinguistics discussions I’ve had recently, but he’s no substitute for peer-reviewed research, and he’s a bit too light on theory to appeal to me, even casually. Too much of the “what”, too little of the “why”.