Yes. I think several clients have open feature requests, and the Stalwart documentation has a list of projects. There is one command-line client as of now, but I'm not switching to a CLI mail client or proprietary software, so I've postponed it. We'll see where this is going.
I welcome these modernization attempts. Though ideally I'd love to see someone revamp email in its entirety: add encryption and signatures, integrate chat, and crack down on spam and phishing. Not sure that's ever going to happen, but it'd be great, too.

They’re fairly transparent about everything, so you could call it open source, and it’s supposed to be the first large model that complies with the EU AI regulations. They make an effort not to include too much material from people who objected to AI use, there is a way to opt out, and they did not deliberately pirate books like Meta did. But with all that said, it’s still AI. Training needs a lot of water and energy, though I think the Alps supercomputing center tries to be carbon-neutral and run on Swiss hydropower, whatever that means in practice. Opt-out is probably the best we can do, but it’s not exactly consent from the authors of the training material, and I don’t see a way to compensate them. And AI can of course be problematic once it’s used, so that depends on what people do with it.
I’d call it more ethical than other models, but I don’t see how it’d be strictly ethical in absolute terms. It looks to me like an effort to improve, maybe substantially, on what others have done. But there are still a lot of problematic aspects of AI that scientists and society haven’t addressed yet.