Shout out to F-Droid for being awesome. But realistically it’s not going to cover all the apps you’ll ever need.
My concern is with malware that exploits the software stack, though, and those links pertain to scams that exploit human nature. Hence they don’t really support an argument that the iOS or Android stack is more or less secure.
Scams that exploit human nature are an inevitable part of being online and there is no foolproof way to prevent them. I never said that either company was better or worse at reactive removal.
Scam apps require user interaction to achieve their goals. They largely aren’t doing anything that the user doesn’t allow them to do. So while I would always advocate swift removal, the onus is on me to protect myself rather than on the store itself.
The links I posted related to software on the Play Store exploiting aspects of the Android stack to surreptitiously perform tasks without the user’s knowledge. If somebody downloads one of those apps, it can do things the user isn’t aware of and never allowed. This is the kind of exploitation that is preventable by thorough fuzzing, and this is the kind of threat that iOS does a fantastic job of protecting against.
Put it this way: I can safely download any app from the Apple App Store knowing that it is highly unlikely it will fuck with my device. I know that if it does it’ll probably be noteworthy enough to make the news. I can’t say the same for the Google Play Store.
Except that somehow it just keeps happening to Google:
Whatever Apple is doing, you just don’t see this level of compromise on iOS. It’s not that the Google store is merely no better; it seems to be so much worse.
The Wileyfox Swift was a rebadged device from an ODM, and at the time was quite well known and liked because the company was UK based and touted responsive local support. The hardware was good and the software support certainly no worse than any other at the time. The frustration of using it came from the problems inherent in the android stack, not the device itself.
I wanted to use android and I tried my best to make it a rational choice. The issues I encountered applied all the same to phones many times the price I paid, hence making iOS my only option. All these years later most of those core issues persist.
No I did not, and the Swift was (at the time) an official Lineage target. It performed well, but the amount of work and effort it took to attain and maintain that performance was simply unacceptable to me. I like the concept of Android and I like how open it is, but that doesn’t mean I’m going to be an apologist for its shortcomings, of which there are many. I would love to be able to justify using an Android device, but it is just not a rational choice for me. And, it would seem, for many others.
Denigrating something is by definition unfair criticism, and I don’t think even the most evangelical of Android fans can defend the mediocre manufacturer support and security history of the platform.
I tried a full phone cycle on Android. A Wileyfox Swift. I stuck with it for 4 years. I’ve dealt with a handful of Android tablets. I still have to wrangle Android on fire sticks.
I love to mess around with electronics but holy shit, never again. These are devices that need to work and perform. I got so damn tired of messing with Lineage and TWRP, the alternative being zero updates from the manufacturer. The whole stack is a janky mess, and a moving target in terms of security and performance. Flagship phones that might stay current and perform well for a couple of years? Wtf?
So many android apps are dogshit. There’s no minimum bar to entry. Malicious apps sneak onto the play store. Out of date apps linger around.
My phone is not a project piece. It’s an essential device. Apple gives me a stringently vetted App Store, strong privacy controls, dependable hardware and performance. They expose the settings that I need and optimise everything else. My iPhone works and does its job with far less painful maintenance. I’m definitely willing to trade some freedom for that utility.
Not only that, but Apple hasn’t tried to DRM the open web lately. Are you sure this is consumerism and peer pressure, and not a dogshit software stack with poor performance, security and hardware driving away the users who are most engaged with their devices?
Do I care what phone you’re using? No. But I think bullshit click bait articles which effectively denigrate an entire demographic for the sake of instigating a tired back and forth about apples vs oranges should stay on the other side of the fucking paywall.
Hasn’t been an issue for me. HA would only be depending on OPNsense for a DHCP lease, so assuming you have reasonable lease times it’ll just pick up where it left off.
Without checking, I would imagine you could just set a startup delay for the HA container to make sure OPNsense starts first, if it does become an issue.
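If it does come to that, here’s a minimal sketch of the idea on Proxmox; guest IDs 100 (OPNsense) and 101 (HA) are hypothetical stand-ins for your own:

```python
# A minimal sketch, assuming OPNsense is guest 100 and HA is guest 101
# (hypothetical IDs): boot OPNsense first, then wait 60 seconds before
# starting HA. Run on the Proxmox host as root; for VM guests use "qm"
# in place of "pct".
import subprocess

subprocess.run(["pct", "set", "100", "--startup", "order=1"], check=True)
subprocess.run(["pct", "set", "101", "--startup", "order=2,up=60"], check=True)
```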
I use an N5105 generic mini PC running Proxmox and OPNsense. You can get them fairly cheaply from AliExpress. They’re particularly low power and come with 4-6 gigabit network ports. I have two containers, the second of which hosts my Home Assistant instance. As an added bonus they often don’t have a fan.
For wifi I use Ubiquiti WiFi 6 Lite APs, with the controller running under Home Assistant.
You can ignore the Windows machine unless it’s using NFS; it’s not relevant.
Your screenshot suggests my guess was incorrect because you do not have any authorised Networks or Hosts defined.
Even so, if it were me I would explicitly configure authorised hosts or authorised networks just to rule it out, as an IP restriction would neatly explain why it works on one container but not the other. Does the clone have the same IP by any chance?
The only other thing I can think for you to try is to set maproot user/group to root/wheel and see if that helps but it’s just a shot in the dark.
The two Docker containers can access the share, but the new Proxmox container can’t?
The new Proxmox container will have a different IP. My guess would be that the IP of the Docker host is permitted to access the NFS share but the IP of the new Proxmox container is not.
To test, you can allow access from your entire LAN subnet (192.168.1.0/24).
Edit: For reference see: https://www.truenas.com/docs/scale/scaletutorials/shares/addingnfsshares/#adding-nfs-share-network-and-hosts
In particular: “If you want to enter allowed systems, click Add to the right of Add hosts. Enter a host name or IP address to allow that system access to the NFS share. Click Add for each allowed system you want to define. Defining authorized systems restricts access to all other systems. Press the X to delete the field and allow all systems access to the share.”
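If you do define authorised networks, here’s a quick way to sanity check which clients actually fall inside the subnet; all addresses below are hypothetical stand-ins for your own:

```python
# Check whether each client IP sits inside the authorised network.
# Addresses are hypothetical; substitute your Docker host and the new
# Proxmox container.
from ipaddress import ip_address, ip_network

allowed = ip_network("192.168.1.0/24")
for client in ("192.168.1.50", "192.168.2.10"):
    print(client, ip_address(client) in allowed)
```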
I just switched to Librewolf from Brave because fuck Chromium and fuck Google.
Did I trust Brave as a browser? Yes, at least enough to use it as my daily driver, because the worst thing they’ve done that I’m aware of is add affiliate links. When somebody noticed, they didn’t bullshit their way out of it; they apologised and fixed it:
https://www.theverge.com/2020/6/8/21283769/brave-browser-affiliate-links-crypto-privacy-ceo-apology
There is a lot of hand-wringing about various aspects of their browser and the personality of their CEO, but the browser is open source and the code is watched by a lot of eyeballs. If they went truly bad, somebody would notice quickly.
They are a company and have to find a way to make money but they never once forced anything on me. It was always relatively simple to disable anything they added that I didn’t want and they never added anything surreptitiously. Unlike Firefox: https://medium.com/@neothefox/firefox-installs-add-ons-into-your-browser-without-consent-again-d3e2c8e08587 and https://techcrunch.com/2017/12/15/mozillas-mr-robot-promo-backfires-after-it-installs-firefox-extension-without-permission/
I know it’s not going to be popular to criticise Firefox, and I understand its importance as the last true alternative to Chromium, but my point is that none of the options are whiter than white. And as far as the available options go, Brave and Firefox stand head and shoulders above the rest.
I imagine product managers at Google and Microsoft would be very happy to see us shitting on one of the few open source browsers to gain any kind of traction, instead of focusing our outrage towards their behaviour.
Are you sure this wasn’t written by some poor intern under some form of duress?
“With the download tray, you can see a list of all your downloads from the past 24 hours in any browser window, not just the one in which you originally downloaded a file. The tray also offers in-line options to open the folder a download is in, cancel a download, retry a download should it fail for any reason, and pause/resume downloads.”
If your only goal is working HTTPS then, as the other comment correctly suggests, you can do DNS-01 authentication with Let’s Encrypt + Certbot + some brand of dynamic DNS.
However, the other comment is incorrect in stating that you need to expose an HTTP server. This method means you don’t need to expose anything. For instance, if you do it with HA:
https://github.com/home-assistant/addons/blob/master/letsencrypt/DOCS.md
Certbot uses the API of your DDNS provider to authenticate the cert request by adding a TXT record, and then pulls the cert. No proxies, no exposed servers and no fuss. Point the A record at your RFC 1918 IP.
You can then configure your DNS to keep serving cached responses. I think, though, that SSL will still be broken while your connection is down, but you will be able to access your services.
Edit to add: I don’t understand why so many of the HTTPS tutorials are so complicated and so focused on adding a proxy into the mix even when remote access isn’t the target.
Certbot is a simple command line tool. It asks the Let’s Encrypt API for a secret key. It adds the key as a TXT record on a subdomain of the domain you want a certificate for. Let’s Encrypt confirms the key is there and spits out a cert. You add the cert to whatever server it belongs to, or ideally Certbot does that for you. That’s it: working HTTPS. And all you have to expose is the RFC 1918 address. This, to me at least, is preferable to proxies and exposed servers.
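To make the shape of that round trip concrete, here’s an illustrative sketch. It is not a working ACME client, and the provider endpoint, auth scheme, domain and token are all hypothetical placeholders for whatever your DDNS provider offers; Certbot’s DNS plugins do all of this for you:

```python
# Illustrative only: the shape of a DNS-01 validation, not a real ACME
# client. Endpoint, auth scheme, domain and token are hypothetical.
import requests

DOMAIN = "ha.example.com"               # name you want a cert for
TOKEN = "challenge-token-from-certbot"  # handed out by Let's Encrypt

# 1. Publish the challenge as a TXT record via the DDNS provider's API.
requests.put(
    "https://api.example-ddns.com/records",
    json={"type": "TXT",
          "name": f"_acme-challenge.{DOMAIN}",
          "content": TOKEN},
    headers={"Authorization": "Bearer <your-api-token>"},
    timeout=10,
)

# 2. Let's Encrypt looks up _acme-challenge.ha.example.com over public
#    DNS, finds the token, and issues the cert. Nothing on your network
#    ever accepts an inbound connection.
```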
Not that I don’t love Ubi, but OPNsense and pfSense will also handle failover:
https://docs.opnsense.org/manual/multiwan.html
This is also possible within Linux, Windows and *BSD by just adding both possible routes and weighting them accordingly:
https://serverfault.com/questions/226530/get-linux-to-change-default-route-if-one-path-goes-down
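As a minimal sketch of that second approach on Linux (interface names and gateway IPs below are hypothetical; run as root):

```python
# Install a weighted multipath default route: flows are spread 2:1
# across the two gateways, and a next hop is skipped if its interface
# goes down. Interfaces and gateways are hypothetical.
import subprocess

subprocess.run([
    "ip", "route", "replace", "default", "scope", "global",
    "nexthop", "via", "192.168.1.1", "dev", "eth0", "weight", "2",  # primary WAN
    "nexthop", "via", "192.168.2.1", "dev", "eth1", "weight", "1",  # backup WAN
], check=True)
```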
Yes. Depending on your network configuration you could consider using cellular data as a backup form of connectivity.
A long time ago I used something like sockd to run a local proxy and then send that data to my personal remote proxy server over port 80, something like https://win2socks.com/ I think
Maybe there’s something better than socks these days.
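For what it’s worth, one simple modern equivalent is an SSH dynamic forward, which still speaks SOCKS locally but needs nothing beyond OpenSSH and a VPS (host and ports below are hypothetical):

```python
# Start an SSH dynamic forward: a local SOCKS5 proxy on localhost:1080,
# tunnelled to a personal VPS over port 443. Host and ports are
# hypothetical; any SOCKS-aware app can then point at localhost:1080.
import subprocess

subprocess.run([
    "ssh", "-N",             # no remote command, forwarding only
    "-D", "1080",            # local SOCKS5 listener
    "-p", "443",             # reach the VPS on a commonly open port
    "user@vps.example.com",
], check=True)
```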
Back then it worked pretty well, but I don’t think they were doing DPI. The admins did seem to notice large file transfers and appeared to be killing them manually.
I would assume most places these days will collect NetFlow data at least, so while HTTPS will protect the contents, they will be able to see the potentially unusual amount of data moving back and forth to your proxy’s IP.
I would suggest at least using a VPS to hide your school’s IP address from the IRC servers. And you may be in serious trouble if you get caught; if you’re in the UK you’re risking jail time, and speaking from personal experience, they take this shit seriously.
So maybe just set up a personal hotspot.