

Well no. Initially I had the storage set on the VM where it's running. I wasn't expecting it to download all that data.
OK, so maybe I didn't explain myself. What I meant was that I would like resilience, so that if one server goes down, I've got the other to quickly fire up. The only problem is that the slave server has a smaller pool, so I can't replicate the whole pool of the master server.
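Thinking about it, maybe I don't need to replicate the whole pool; replicating just the datasets that matter would fit the smaller pool. A minimal sketch of what I have in mind, assuming ZFS on both ends and placeholder pool/dataset names:

    # Snapshot just the critical dataset tree, recursively
    zfs snapshot -r tank/critical@nightly

    # Send it to the slave's (smaller) pool; -R preserves child datasets and properties
    zfs send -R tank/critical@nightly | ssh slave zfs recv -F backup/critical

TrueNAS Scale can do the same thing per dataset through its built-in replication tasks.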
Well, it's ticked but not working then, because I found duplicate links. Maybe it only works if you try to store the same link twice, but it doesn't work on the imported bookmarks.
I was using floccus, but what is the point of saving bookmarks twice, once in Linkwarden and once in the browser?
Looks very interesting. But as others noted, it's still too young: only two releases in 3 months and one person behind it. Certainly one to keep an eye on. The MIT licence worries me too. I always add the licence to the criteria ;-)
Absolutely, none of that is going past my router.
Interestingly, I did something similar with Linkwarden, where I installed the datasets in /home/user/linkwarden/data. The damn thing caused my VM to run out of space because it started downloading pages for the 4000 bookmarks I had. It went into crisis mode, so I stopped it. I then created a dataset on my TrueNAS Scale machine and NFS-exported it to the VM on the same server. I simply did a cp -R to the new NFS mountpoint, edited the yml file with the new paths and voila! It seems to be working. I know that some Docker containers don't like working off an NFS share, so we'll see.

I wonder how well this will work when the VM is on a different machine, as there is a network cable, a switch, etc. in between. If for any reason the NAS goes down, the Docker containers on the Proxmox VM will be crying as they'll lose the link to their volumes? Can anything be done about this? I guess it can never be as resilient as having the VM and NAS on the same machine.
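On the "can anything be done" question: one option I'm considering is letting Docker manage the NFS mount itself as a named volume instead of bind-mounting an OS-level mountpoint, so I can set soft-mount options and processes get I/O errors instead of hanging forever if the NAS drops. A minimal sketch, assuming a placeholder NAS address and export path:

    volumes:
      linkwarden_data:
        driver: local
        driver_opts:
          type: nfs
          o: "addr=192.168.1.10,rw,nfsvers=4,soft,timeo=100,retrans=3"
          device: ":/mnt/tank/linkwarden"

Soft mounts can lose writes that are in flight when the NAS dies, so for a database volume a hard mount plus restart: always is probably the safer trade-off.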
The first rule of containers is that you do not store any data in containers.
Do you mean they should be bind mounts? From here, a bind mount should look like this:
version: '3.8'
services:
  my_container:
    image: my_image:latest
    volumes:
      - /path/on/host:/path/in/container
So referring to my Firefly compose above, I should simply be able to copy over /var/www/html/storage/upload for the main app data, and the database stored in /var/lib/mysql can just be copied over? But then why does my local folder not have any storage/upload folders?
user@vm101:/var/www/html$ ls
index.html
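From what I've now read, that may be expected: firefly_iii_upload and firefly_iii_db in my compose below are named volumes, so Docker keeps the data under its own directory (typically /var/lib/docker/volumes/) rather than at the container paths. I can apparently check where they actually live like this:

    # List volumes; compose prefixes names with the project name
    docker volume ls

    # Show the host-side mountpoint (volume name below is a guess, check the ls output)
    docker volume inspect firefly_iii_upload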
Here is my docker compose file below. I think I used the standard file that the developer ships, simply because I was keen to get Firefly going without fully understanding the complexity of Docker storage and volumes.
#
# The Firefly III Data Importer will ask you for the Firefly III URL and a "Client ID".
# You can generate the Client ID at http://localhost/profile (after registering)
# The Firefly III URL is: http://app:8080/
#
# Other URL's will give 500 | Server Error
#
services:
  app:
    image: fireflyiii/core:latest
    hostname: app
    container_name: firefly_iii_core
    networks:
      - firefly_iii
    restart: always
    volumes:
      - firefly_iii_upload:/var/www/html/storage/upload
    env_file: .env
    ports:
      - '84:8080'
    depends_on:
      - db
  db:
    image: mariadb:lts
    hostname: db
    container_name: firefly_iii_db
    networks:
      - firefly_iii
    restart: always
    env_file: .db.env
    volumes:
      - firefly_iii_db:/var/lib/mysql
  importer:
    image: fireflyiii/data-importer:latest
    hostname: importer
    restart: always
    container_name: firefly_iii_importer
    networks:
      - firefly_iii
    ports:
      - '81:8080'
    depends_on:
      - app
    env_file: .importer.env
  cron:
    #
    # To make this work, set STATIC_CRON_TOKEN in your .env file or as an environment variable and replace REPLACEME below
    # The STATIC_CRON_TOKEN must be *exactly* 32 characters long
    #
    image: alpine
    container_name: firefly_iii_cron
    restart: always
    command: sh -c "echo \"0 3 * * * wget -qO- http://app:8080/api/v1/cron/XTrhfJh9crQGfGst0OxoU7BCRD9VepYb;echo\" | crontab - && crond -f -L /dev/stdout"
    networks:
      - firefly_iii

volumes:
  firefly_iii_upload:
  firefly_iii_db:

networks:
  firefly_iii:
    driver: bridge
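And if I understand bind mounts correctly, converting those two named volumes to host paths would just mean changing the volume lines to something like this (host paths are placeholders), at which point copying data over would actually make sense:

    # Only the volumes lines change; everything else stays as above
    services:
      app:
        volumes:
          - /home/user/firefly/upload:/var/www/html/storage/upload
      db:
        volumes:
          - /home/user/firefly/db:/var/lib/mysql

with the top-level volumes: entries for firefly_iii_upload and firefly_iii_db removed. Happy to be corrected if I've got that backwards.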
Documentation is impressive. I need to take a look. Thanks for sharing.
Yeah, it seems like registration with IPinfo is required so that you can download a token, which then allows pfBlockerNG to download the ASN database. I've just registered with IPinfo and it seems (unless it's a false alarm) that it now works.
However, I've also learned that all the Aruba ASNs I had didn't include the SMTPS server I was using.
Basically, I did an nslookup smtps.aruba.it, got the IP, and then searched for the ASN using the Team Cymru IP to ASN Lookup v1.0 here https://asn.cymru.com/cgi-bin/whois.cgi. I then copied the ASN into the WAN_EGRESS list and bingo, it's working.
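In shell terms, what I did boils down to this (the IP below is a placeholder for whatever nslookup returns):

    # Resolve the mail server to an IP
    nslookup smtps.aruba.it

    # Ask Team Cymru which ASN announces that IP (needs the whois client installed)
    whois -h whois.cymru.com " -v 198.51.100.25"

The quoted " -v" makes Cymru return the ASN, the announcing prefix and the AS name on one line.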
I agree. In principle Nextcloud is a great idea and project, but it has a lot of issues that make maintaining it a pain. I had it for over 2 years and every update was painful. I gave up and moved to Syncthing+Radicale. Is there something I miss? Yes, the ability to share, as Syncthing doesn't allow sharing.
These look really interesting.
Good shout. I hadn't thought of the hotspot option, although I wanted to relegate WhatsApp to the burner phone as I just use it for the kids' school.
The problem is that many banks are using mobile phones as 2FA devices and they don't allow other means. I asked why I couldn't go back to SMS as 2FA and they said that they deem it to be insecure.
Yes, I've been thinking about a burner phone, but it's difficult to find pay-as-you-go SIMs these days here. You end up in some form of contract. There you go … you want privacy, you have to pay for it. Wtf! We'll soon be screwed altogether. They'll soon ban non-stock ROMs too … not long till this happens …
Surely that can’t be true if you live in the EU. GDPR would screw them.
There are a couple of dependencies that I will find hard to get rid of, like bank authentication. It's getting really hard to find banks that allow 2FA via methods that don't require the app.
Molly.im does not have a lot of documentation. Does it equally rely on a centralised server? If it does, then surely one of the downsides is that there probably isn't a huge foundation behind it ensuring the bills are paid, etc. Or is Molly piggybacking on Signal's servers? And is the Signal Foundation happy to have Molly users using its services? How long before the Signal Foundation kicks Molly users off its servers?
Also, I note you can download two different versions: one with Google blobs and one without. What compromise do I have to make if I choose the Google version?
Mate, it was a sarcastic statement 😉