
  • I am an LLM researcher at MIT, and hopefully this will help.

    As others have answered, LLMs have only learned the ability to autocomplete given some input, known as the prompt. Functionally, the model is strictly predicting the probability of the next word+ (really the next token), with some randomness injected so the output isn’t exactly the same for any given prompt.

    The probability of the next word comes from what was in the model’s training data, combined with a very complex mathematical method, called self-attention, that computes the impact of every previous word on every other previous word and on the newly predicted word. You can think of this as a computed relatedness factor.
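    Here is a rough sketch of that relatedness computation (scaled dot-product attention, the core operation inside self-attention); the dimensions and random values below are made up purely for illustration:

    ```python
    # Toy sketch of scaled dot-product attention, the "relatedness factor"
    # described above. All sizes and values are made up for illustration.
    import numpy as np

    def softmax(x, axis=-1):
        e = np.exp(x - x.max(axis=axis, keepdims=True))
        return e / e.sum(axis=axis, keepdims=True)

    n_words, d = 5, 8                      # 5 previous words, 8-dim vectors
    rng = np.random.default_rng(0)
    q = rng.normal(size=(n_words, d))      # queries: what each word is looking for
    k = rng.normal(size=(n_words, d))      # keys: what each word offers
    v = rng.normal(size=(n_words, d))      # values: the information each word carries

    relatedness = softmax(q @ k.T / np.sqrt(d))  # n_words x n_words, every word vs. every word
    new_repr = relatedness @ v                   # each word's updated representation

    # That n_words x n_words matrix is why the cost grows quadratically
    # with the number of words in the context window.
    print(relatedness.shape, new_repr.shape)
    ```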

    This relatedness factor is very computationally expensive, and its cost grows quadratically with the number of words, so models are limited in how many previous words they can use to compute relatedness. This limitation is called the Context Window. The recent breakthroughs in LLMs come from the use of very large context windows to learn the relationships of as many words as possible.

    This process of predicting the next word is repeated iteratively until a special stop token is generated, which tells the model to stop generating more words. So, quite literally, the model builds entire responses one word at a time, from left to right.
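    A minimal sketch of that loop, with a made-up stand-in for the model (nothing here is a real LLM API; it only shows the shape of the process):

    ```python
    # Minimal sketch of the generation loop described above. `next_token_probs`
    # is a made-up stand-in; a real LLM returns probabilities over a huge
    # vocabulary, computed from the prompt via self-attention.
    import random

    STOP_TOKEN = "<stop>"

    def next_token_probs(tokens):
        # Placeholder distribution: uniform over a tiny fake vocabulary.
        vocab = ["Hello", ",", " world", "!", STOP_TOKEN]
        return {tok: 1.0 / len(vocab) for tok in vocab}

    def generate(prompt_tokens, max_tokens=50):
        tokens = list(prompt_tokens)
        for _ in range(max_tokens):
            probs = next_token_probs(tokens)
            # Sampling (instead of always taking the single most likely token)
            # is the injected randomness mentioned above.
            next_tok = random.choices(list(probs), weights=list(probs.values()))[0]
            if next_tok == STOP_TOKEN:  # the special stop token ends generation
                break
            tokens.append(next_tok)     # every new token conditions the next prediction
        return tokens

    print(generate(["Say", " hi", ":"]))
    ```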

    Because every future word is predicated on the words already stated, whether in the prompt or in the model’s own generated output, it becomes impossible to apply even the most basic logical concepts unless all the required components are present in the prompt or have somehow serendipitously been stated by the model in its generated response.

    This is also why LLMs tend to work better when you ask them to work out all the steps of a problem instead of jumping to a conclusion, and why the best models tend to rely on extremely verbose answers to give you the simple piece of information you were looking for.

    From this fundamental understanding, hopefully you can now reason about LLMs’ limitations in factual understanding as well. For instance, if a given fact was never mentioned in the training data, or an answer simply doesn’t exist, the model will make it up, inferring the next most likely word to create a plausible-sounding statement. Essentially, the model has been faking language understanding so well that even when it has no factual basis for an answer, it can easily trick an unwitting human into believing the answer is correct.

    ---

    +More specifically, these “words” are tokens, which usually represent some smaller piece of a word. For instance, “understand” and “able” could be represented as two tokens that, when put together, form the word “understandable”.
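    For example, using the tiktoken library (the exact split depends on the tokenizer’s learned vocabulary, so treat the printed pieces as illustrative rather than definitive):

    ```python
    # Illustration of subword tokenization with the tiktoken library. How a
    # word gets split depends on the tokenizer's learned vocabulary, so the
    # pieces may differ from the "understand" + "able" example above.
    import tiktoken

    enc = tiktoken.get_encoding("cl100k_base")
    ids = enc.encode("understandable")
    pieces = [enc.decode([i]) for i in ids]
    print(ids)     # the integer token ids the model actually sees
    print(pieces)  # the subword pieces those ids map back to
    ```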




  • I’m convinced that we should apply the same requirements to driving a car as we do to flying an airplane.

    As a pilot, there are several items I need to log at regular intervals to remain proficient, so that I can continue to fly with passengers or fly under certain conditions. The biggest one is the need for a Flight Review every two years.

    If we did the bare minimum and implemented a Driving Review every two years, our roads would be a lot safer and a lot fewer people would die. If people cared as much about driving deaths as they do about flying deaths, the world would be a much better place.



  • This is done by combining a diffusion model with a ControlNet. As long as you have a decently modern Nvidia GPU and familiarity with Python and PyTorch, it’s relatively simple to create your own model.

    The ControlNet paper is here: https://arxiv.org/pdf/2302.05543.pdf

    I implemented this paper back in March. It’s as simple as it is brilliant. By using methods originally intended to adapt large pre-trained language models to a specific application, the authors created a new model architecture that can better control the output of a diffusion model.
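    The core trick, roughly sketched below in PyTorch (this is my own illustration, not the authors’ code; module names and shapes are made up), is to freeze the pretrained network, train a copy of it that also sees the conditioning input, and connect the copy through zero-initialized convolutions so training starts out identical to the original model:

    ```python
    # Rough sketch of ControlNet's core idea: keep the pretrained block frozen,
    # train a copy of it that also sees the conditioning input, and connect the
    # copy through 1x1 "zero convolutions" initialized to zero so training
    # starts from the unmodified model.
    import copy
    import torch
    import torch.nn as nn

    def zero_conv(channels: int) -> nn.Conv2d:
        conv = nn.Conv2d(channels, channels, kernel_size=1)
        nn.init.zeros_(conv.weight)
        nn.init.zeros_(conv.bias)
        return conv

    class ControlledBlock(nn.Module):
        def __init__(self, pretrained_block: nn.Module, channels: int):
            super().__init__()
            self.locked = pretrained_block                    # frozen original weights
            for p in self.locked.parameters():
                p.requires_grad_(False)
            self.trainable = copy.deepcopy(pretrained_block)  # trainable copy
            self.zero_in = zero_conv(channels)                # injects the condition
            self.zero_out = zero_conv(channels)               # gates the copy's output

        def forward(self, x: torch.Tensor, condition: torch.Tensor) -> torch.Tensor:
            # At initialization both zero convs output zeros, so this reduces to
            # the original frozen block; control is learned gradually.
            controlled = self.trainable(x + self.zero_in(condition))
            return self.locked(x) + self.zero_out(controlled)

    # Toy usage with a stand-in "pretrained" block; shapes are made up.
    block = nn.Conv2d(8, 8, kernel_size=3, padding=1)
    model = ControlledBlock(block, channels=8)
    out = model(torch.randn(1, 8, 32, 32), torch.randn(1, 8, 32, 32))
    print(out.shape)
    ```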





  • I don’t go around using that word because of how many people find it disrespectful. But, and I ask this out of honest curiosity, why is it offensive in the first place?

    I see it as synonymous with ‘idiot’ or ‘stupid’ when used colloquially. The argument that it’s a medical term doesn’t really hold, as ‘idiot’ and ‘moron’ are also medical terms that refer to a lack of intellectual acuity. In many ways ‘retarded’ has the same meaning both colloquially and medically. To be mentally retarded is to be mentally slowed, lacking that same mental acuity that ‘idiot’ or ‘moron’ convey.

    Retarded just means slow, and it’s a perfectly apt description. Where I think people get confused is when retardation is linked with a specific attribute, like physical retardation or emotional retardation; those convey very different meanings.

    I’m not saying that we should start using it again, just that I find it odd how society has latched onto one very specific word and labelled it as bad in the span of a decade. At the end of the day, any word that can be used to insult or demean is rude. It’s not the word being used, it’s what is meant by it. The term ‘cisgender’ is also being used in a highly exclusionary way and is often conveyed as an insult. However, its real meaning is not insulting in the least.


  • A lot of people in this situation have already tried everything, and they are complaining on the internet because they feel like there is nothing else they can do. Most people won’t even complain about it with their friends. I know I didn’t until one day I got drunk camping. That was the trigger for me to do something about it, because it was clearly eating away at me.

    I had many conversations with my wife after that about how much our sexless marriage really bothered me, but also about how I am still completely and totally in love with her. We don’t have kids, so there isn’t anything forcing us to stay together. We agreed to an open relationship with our own rules, and each of us has a veto that can stop the arrangement at any time. We are still completely committed to one another and love spending time together. Things have never been better; now when I think about being with another woman, I don’t need to feel guilty about it.

    And I think this is the natural state of humanity. Everyone needs a long-term partner for stability and to care for one another, but people have always felt the pull to cheat as well. Monogamy is as core to our DNA and survival as adultery is. But because we’ve talked about it, there is no sneaking around necessary, no lying. We are completely honest with each other about everything.


  • The same argument could be made about an apartment building too. We need to collectively realize that Single Family Houses are a luxury that most of us will never see in our lifetimes. Our grandparents were able to enjoy them at low prices because the US had half the population it does today.

    Restrictive building codes that only permit building SFHs are the cause of our housing shortage, not short-term rentals, which make up 0.2% to 1% of all dwellings.



  • At its core, this is the root cause of the housing crisis: we do not have enough supply. The number of Airbnbs that exist is extremely small, and the targeting of Airbnbs is an intentional distraction tactic.

    Depending on the source, 0.2% to 1% of all dwellings are listed for short-term rental in the US. That’s crazy small and has very little impact on housing prices overall.

    The fact of the matter is that Single Family Homes are an incredible luxury that our parents and grandparents were able to enjoy when the country had half as many people as it does now. It is no longer sustainable to expect a SFH in the US, and the American public’s continued clinging to that dream, along with restrictive zoning practices, is really what is driving up prices.

    If you want an affordable house you will need to move to a rural area where land and labor are cheap. If you want to live near any reasonably sized city, you better be upper middle class to even think about buying a SFH.


  • We used to run an Airbnb out of the spare rooms in our house. It was very cheaply priced, and we were always booked out for months. Superhost status and everything. It was clear most people just look at the price and never the description or rules. We rented two bedrooms with a shared bathroom, and the number of complaints we received because they had to share a bathroom with someone else was obnoxious.

    We closed up shop during the pandemic and just used those rooms as guest rooms instead. In hindsight, it wasn’t worth the hassle of dealing with self-centered people who expect an experience superior to a hotel’s at a quarter of the price. We also had some fantastic guests that we loved having stay with us, but the few bad experiences dramatically overshadowed all the good, decent people.

    Airbnbs are so shitty today because their customers are, on aggregate, just as shitty.