Hello! 👋

  • 0 Posts
  • 105 Comments
Joined 1 year ago
Cake day: June 11th, 2023

  • I think most people are fine with it if it only happens once and you seem genuinely sorry about it. I am fine with people forgetting we even met; I will even play along and pretend it is the first time we meet. So far it has only been people I have met maybe twice in my life (with one exception). I have only met one person who actually got sad when I forgot her name. She asked if I was not as happy to see her as she was to see me… That hurt. I remembered her, though! Just not her name.

    I think it is worse when I have to introduce myself every time we meet and it has happened more than three times… One guy never remembered me until the fifth time, when he said “heey, we have met before, right?!” He even looked genuinely happy, and I thought “finally we can stop pretending”, but then the next time we met he introduced himself again… I remember you, Felix!

  • Same (Proton and uBlock), but I have also found that some websites care which browser you use if you are on Proton. If I use Chrome, they may just do the verification where you wait 3 seconds, but with Firefox and Proton (not without Proton) I get a lot of captchas, sometimes several in a row just to make triple sure I am not a bot… or I even get blocked entirely.

  • Kuma@lemmy.world to Memes@lemmy.ml · FYI · 8 months ago

    I don’t know about other countries, but in the Nordic countries it was not a day for presents until after the 1600s, so there may have been a time with less visible capitalism. The presents started as gifts to people in need and later became something you give to family and friends. The day wasn’t even a Christian holiday at first, not until we converted; it was called midwinter (no presents, just a celebration). But like the person who responded to you before me said: capitalism has been around a lot longer.

  • I didn’t know that it was seen as a bad thing by some devs. At my company (consulting) we say that we failed if we have to spin up a full server. We do infrastructure as code very often, and that wouldn’t be as easy, or even possible, without serverless. It is also easier to monitor what costs money (and what needs more performance) that way. I have seen some people wish they could get into the server, but you don’t have to; that is the whole point. All your configuration is done in a portal like Azure’s.

    The only times (extremely few) I have gone into a serverless app are when I have to check the app’s configuration for a very old app that may have been deployed manually (I get surprised every time) and I don’t know the values that need to be set, or when logging is done to disk instead of using Application Insights. Thankfully these are exceptions, not the norm. It is usually applications that were fire-and-forget projects and always worked, until someone wants new functionality.
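
    As a rough illustration of how little “server” there is left to touch: a minimal Azure Functions HTTP handler in Python (v2 programming model). This is only a sketch; the route name and greeting are made up, and scaling, TLS, and host settings live in the platform/portal rather than in the code.

    ```python
    import azure.functions as func

    # The platform owns the process, scaling, and networking; the code only
    # declares a trigger and handles the request.
    app = func.FunctionApp(http_auth_level=func.AuthLevel.ANONYMOUS)

    @app.route(route="hello")  # hypothetical route; served as /api/hello
    def hello(req: func.HttpRequest) -> func.HttpResponse:
        name = req.params.get("name", "world")
        return func.HttpResponse(f"Hello, {name}!")
    ```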

  • Ooof, I am glad you don’t have to do it anymore. I have a customer who is in the same situation. The company with the site was also OK with it (it was a running joke that “this [bot] is our fastest user”), but it was very sketchy because you had to log in as someone to run the bot. Thankfully they always told us when they made changes, so we were never caught by surprise.

  • Kuma@lemmy.world to Lemmy Shitpost@lemmy.world · Chad scraper · 9 months ago

    This kinda reminds me of pirating vs paying. Using an API = you know it will always have the same structure and you will get the data you asked for; if something changes you will usually be notified, or the API is versioned. There is usually good documentation, and you can always ask for help.
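
    A minimal sketch of the API side, with a hypothetical versioned JSON endpoint (the URL and fields are made up):

    ```python
    import requests

    # Versioned endpoint: the response shape is documented and stable,
    # so parsing is trivial.
    resp = requests.get("https://api.example.com/v1/users/alice", timeout=10)
    resp.raise_for_status()
    user = resp.json()  # e.g. {"name": ..., "settings": {...}} per the docs
    print(user["name"])
    ```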

    Scraping = you need to scout the whole website yourself. You need to keep up to date with the website’s structure and make sure they haven’t added ways to block bots (scraping). Error handling is a lot more intense on your end: missing content, hidden content, querying for data. The website may not follow the same standards/structure throughout, so you need checks for when to use x to get y. The data may need multiple requests, because they don’t show, for example, all the user settings on one page (while an API call would), or it is an AJAX page and you need to run JavaScript and click buttons whose id, class, or text may change, and data may only load when you do x with JavaScript, so you need to emulate the whole webpage.
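
    For contrast, a sketch of the scraping side (hypothetical page and selectors); most of it is defensive guessing about the markup:

    ```python
    import requests
    from bs4 import BeautifulSoup

    resp = requests.get("https://example.com/users/alice", timeout=10)
    resp.raise_for_status()
    soup = BeautifulSoup(resp.text, "html.parser")

    # Selectors are guesses at the current markup; they break silently
    # when the site changes, so check for missing content explicitly.
    name_tag = soup.select_one("h1.profile-name") or soup.select_one("h1")
    if name_tag is None:
        raise RuntimeError("structure changed, or content is hidden/JS-rendered")
    print(name_tag.get_text(strip=True))
    ```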

    So my guess is that scraping is used most often when you only need to fetch simple data structures and you are fine with cleaning up the data afterwards: grabbing all the text/images on a page, checking whether a page has been updated, or just saving the whole page like the Wayback Machine does.
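
    The “has the page been updated” case above can be as small as hashing the response body between runs (a sketch; it ignores pages with content that changes on every load):

    ```python
    import hashlib
    import requests

    def page_fingerprint(url: str) -> str:
        # Hash the raw body; any change to the page changes the digest.
        body = requests.get(url, timeout=10).content
        return hashlib.sha256(body).hexdigest()

    # Compare against the digest saved from the previous run (storage not shown).
    print(page_fingerprint("https://example.com"))
    ```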