• 0 Posts
  • 124 Comments
Joined 2 years ago
Cake day: July 7th, 2024

  • “Why” implies an underlying ontology. Maybe there is something underneath it, but as far as we currently know, that is as far down as it goes. If we don’t at least tentatively accept that our current most fundamental theories describe the fundamental ontology of nature, at least as far as we currently know, then we can never believe anything about nature at all, because it would be an infinite regress: every time we discover a new theory we can ask “well, why does it work like that?”, and so it would be impossible to actually believe anything about nature.



  • There are nonlocal effects in quantum mechanics, but I am not sure I would consider quantum teleportation to be one of them. Quantum teleportation may look at first glance to be nonlocal, but it can be trivially fit to local hidden variable models, such as Spekkens’ toy model, which makes it seem, at least to me, to belong in the class of local algorithms.

    You have to remember that what is being “transferred” is a statistical description, not something physically tangible, and it is only observable in a large sample size (an ensemble). Hence, it would be strange to think that the qubit holds a register of its entire quantum state and that this register disappears and reappears on another qubit. The total information in the quantum state only exists in an ensemble.

    In an individual run of the experiment, clearly, the joint measurement of 2 bits of information and its transmission over a classical channel is not transmitting the entire quantum state, but the quantum state is not something that exists in an individual run of the experiment anyways. The total information transmitted over an ensemble is much greater and would provide sufficient information to move the statistical description of one of the qubits to another entirely locally.

    The complete quantum state is transmitted through the classical channel over the whole ensemble, and not in an individual run of the experiment. Hence, it can be replicated in a local model. It only looks like more than 2 bits of data is moving from one qubit to the other if you treat the quantum state as if it actually is a real physical property of a single qubit, because obviously that is not something that can be specified with 2 bits of information, but an ensemble can indeed encode a continuous distribution.

    This is essentially a trivial feature known to any experimentalist, and it needs to be mentioned only because it is stated in many textbooks on quantum mechanics that the wave function is a characteristic of the state of a single particle. If this were so, it would be of interest to perform such a measurement on a single particle (say an electron) which would allow us to determine its own individual wave function. No such measurement is possible.

    — Dmitry Blokhintsev

    Here’s a trivially simple analogy. We describe the statistical distribution of a single bit with [a; b], where a is the probability of 0 and b is the probability of 1. This is a continuous distribution and thus cannot be specified with just 1 bit of information. But we set up a protocol where I measure this bit and send you the bit’s value, and then you set your own bit to match what you received. The statistics on your bit will now also be guaranteed to be [a; b]. How is it that we transmitted a continuous statistical description that cannot be specified in just 1 bit with only 1 bit of information? Because we didn’t. In every single individual trial, we are always just transmitting 1 single bit. The statistical descriptions refer to an ensemble, and so you have to consider the amount of information actually transmitted over the ensemble.
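    The single-bit analogy above is easy to simulate. Here is a quick sketch (the distribution [a; b] and the protocol are exactly as described; the function name and parameters are my own):

```python
import random

def run_protocol(a, trials=100_000, seed=0):
    """Each trial transmits exactly 1 classical bit, yet the receiver's
    ensemble statistics reproduce the continuous distribution [a; 1-a]."""
    rng = random.Random(seed)
    received = []
    for _ in range(trials):
        # Sender's bit is 0 with probability a, 1 with probability 1-a
        bit = 0 if rng.random() < a else 1
        # "Measure" the bit and send its value; the receiver copies it
        received.append(bit)
    # Frequency of 0 on the receiver's side over the whole ensemble
    return received.count(0) / trials
```

    Over many trials, `run_protocol(a)` converges to a, even though no single trial ever carried more than 1 bit.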

    A qubit’s quantum state has 2 degrees of freedom, as it can be specified on the Bloch sphere with just two angles. The amount of data transmitted over the classical channel is 2 bits. Over an ensemble, those 2 bits would become 2 continuous values, and thus the classical channel over an ensemble contains the exact degrees of freedom needed to describe the complete quantum state of a single qubit.
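    For the curious, here is a minimal numpy sketch of the standard single-qubit teleportation circuit (my own toy implementation, not taken from any library), which makes explicit that the only thing crossing the channel in each individual run is the 2-bit pair (m0, m1):

```python
import numpy as np

rng = np.random.default_rng(0)

I2 = np.eye(2)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])

def op(gate, target):
    """Lift a single-qubit gate to 3 qubits (qubit 0 is most significant)."""
    mats = [I2, I2, I2]
    mats[target] = gate
    return np.kron(np.kron(mats[0], mats[1]), mats[2])

# CNOT with control qubit 0 and target qubit 1 (qubit 2 untouched)
CNOT01 = np.zeros((8, 8))
for b in range(8):
    b0, b1, b2 = (b >> 2) & 1, (b >> 1) & 1, b & 1
    CNOT01[(b0 << 2) | ((b1 ^ b0) << 1) | b2, b] = 1.0

def teleport(alpha, beta):
    # Qubit 0 holds |psi> = alpha|0> + beta|1>; qubits 1, 2 share a Bell pair
    psi = np.array([alpha, beta], dtype=complex)
    bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
    state = np.kron(psi, bell)

    # Alice's Bell measurement: CNOT(0 -> 1), then H on qubit 0
    state = op(H, 0) @ CNOT01 @ state
    amps = state.reshape(4, 2)                 # rows: (m0, m1); cols: qubit 2
    probs = (np.abs(amps) ** 2).sum(axis=1)    # each outcome occurs with p = 1/4
    outcome = rng.choice(4, p=probs)
    m0, m1 = outcome >> 1, outcome & 1
    qubit2 = amps[outcome] / np.linalg.norm(amps[outcome])

    # Classical channel: send the 2 bits (m0, m1); Bob applies corrections
    if m1:
        qubit2 = X @ qubit2
    if m0:
        qubit2 = Z @ qubit2
    return qubit2
```

    A single call returns a qubit whose state matches the input up to floating-point error, but verifying that the statistics were faithfully transferred still requires repeating the run over an ensemble.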


  • I got interested in quantum computing as a way to combat quantum mysticism. Quantum mystics love to use quantum mechanics to justify their mystical claims, like quantum immortality, quantum consciousness, quantum healing, etc. Some mystics use quantum mechanics to “prove” things like we all live inside of a big “cosmic consciousness” and there is no objective reality, and they often reference papers published in the actual academic literature.

    These papers on quantum foundations are almost universally framed in terms of a quantum circuit, because this deals with quantum information science, giving you a logical argument as to something “weird” about quantum mechanics’ logical structure, as shown in things like Bell’s theorem, the Frauchiger-Renner paradox, the Elitzur-Vaidman paradox, etc.

    If a person claims something mystical and sends you a paper, and you can’t understand the paper, how are you supposed to respond? But you can use quantum computing as a tool to help you learn quantum information science so that you can eventually parse the paper, and then you can know how to rebut their mystical claims. But without actually studying the mathematics you will be at a loss.

    You have to put some effort into understanding the mathematics. If you just go vaguely off of what you see in YouTube videos then you’re not going to understand what is actually being talked about. You can, for example, go through IBM’s courses on the basics of quantum computing and read a textbook on quantum computing, and they give you the foundations in quantum information science needed to actually parse the logical arguments in these papers and what they are really trying to say.


  • Moore’s law died a long time ago. Engineers pretended it was still going for years by abusing the nanometer metric: if they cleverly found a way to use the space more effectively, they would say it was as if they had packed more transistors into the same nanometers of space, and so they would call it a smaller-nanometer process node, even though, quite literally, they did not shrink the transistor size or increase the number of transistors on a single die.

    This actually started to happen around 2015. These clever tricks were always exaggerated, because there isn’t an objective metric to say that a particular trick on a 20nm node really gets you performance equivalent to a 14nm node, which gave huge leeway for exaggeration. In reality, actual performance gains have drastically slowed since then, and the cracks have really started to show when you look at the 5000 series GPUs from Nvidia.

    The 5090 is only super powerful because the die size is larger so it fits more transistors on the die, not because they actually fit more per nanometer. If you account for the die size, it’s actually even less efficient than the 4090 and significantly less efficient than the 3090. In order to pretend there have been upgrades, Nvidia has been releasing software for the GPUs for AI frame rendering and artificially locking the AI software behind the newer series GPUs. The program Lossless Scaling proves that you can in theory run AI frame rendering on any GPU, even ones from over a decade ago, and that Nvidia’s locking of it behind a specific GPU is not a hardware limitation but an attempt to make up for the lack of actual improvements in the GPU die.

    Chip improvements have drastically slowed down for over a decade now, and the industry just keeps trying to paper it over.



  • Mathematics is just a language to describe patterns we observe in the world. It really is not fundamentally different from English or Chinese; it is just more precise, so there is less ambiguity as to what is actually being claimed. If someone makes a logical argument with mathematics, they cannot hide behind vague buzzwords with unclear meanings that would prevent the claim from actually being tested.

    Mathematics is just a language that forces you to have extreme clarity, but it is still ultimately a language all the same. Its perfect consistency hardly matters. What matters is that you can describe patterns in the world with it and use it to identify those patterns in a particular context. If the language has some sort of inconsistency that disallows it from being useful in a particular context, then you can just construct a different language that is more useful in that context.

    It is, of course, preferable for it to be more consistent than not, so that it is applicable to as many contexts as possible without having to change up the language, but absolute, perfect, pure consistency is not necessary either.


  • Speed of light limitation. Andromeda is 2.5 million light years away. Even if someone debunks special relativity and finds you could go faster than light, you would be moving so fast relative to cosmic dust particles that it would destroy the ship. So, either way, you cannot practically go faster than the speed of light.

    The only way we could have intergalactic travel is a one-way trip; humanity here on Earth would be long gone by the time it reached its destination, so we could never know whether it succeeded.


  • Historically they often actually have the reverse effect.

    Sanctions aren’t subtle; they aren’t some sneaky way of hurting a country so that the people blame the government and try to overthrow it. They are about as subtle as bombing a country and then blaming its government. Everyone who lives there sees the impacts of the sanctions directly and knows the cause is the foreign power. When a foreign power is laying siege on a country, it often has the effect of strengthening people’s support for the government. Even the government’s flaws can be overlooked, because it can point to the foreign country’s actions to blame.

    Indeed, North Korea is probably the most sanctioned country in history yet is also one of the most stable countries on the planet.

    I thought it was a bit amusing when Russia seized Crimea and the western world’s brilliant response was to sanction Crimea and shut down the water supply going to it, to which Russia responded by building one of the largest bridges in Europe to facilitate trade between Russia and Crimea and investing heavily into building out new water infrastructure.

    If a foreign country is trying to starve you, and the other country is clearly investing a lot of money into trying to help you… who do you think you are winning the favor of with such a policy?

    For some reason the western mind cannot comprehend this. They constantly insist that the western world needs to lay economic siege on all the countries not aligned with it, and when someone points out that this just makes the people of those countries hate the western world, want nothing to do with it, and rally behind their own governments, they just deflect by calling you some sort of “apologist” or whatever.

    Indeed, during the Cuban Thaw when Obama lifted some sanctions, Obama became rather popular in Cuba, to the point that his approval ratings at times even surpassed Fidel’s, and Cuba started to implement reforms to allow for further economic cooperation with the US government and US businesses. They were very happy to become an ally of the US, but then Democrats and Republicans collectively decided to do a U-turn, abandoning all of that and destroying all the goodwill that had been built up.

    But the people of Cuba are not going to capitulate, because the government is actually popular, as US internal documents constantly admit, and that popularity will only be furthered by the tightened blockade. The US is just going to create a North Korea-style scenario off its own coast.


  • People don’t believe him because there is no reason to take his view on this issue seriously. Just because a person is smart in one area doesn’t mean they are a genius in all areas. There is an old essay from the 1800s called “Natural Science and the Spirit World” in which the author takes note of a strange phenomenon: otherwise brilliant scientists being very nutty in other areas. One example is Alfred Russel Wallace, who codiscovered evolution by natural selection but also believed he could communicate with and photograph the ghosts of dead people.

    People don’t take Penrose’s theory on consciousness seriously because it is not based on any reasonable arguments at all. Penrose’s argument is so bizarre that it is amazing even Penrose takes it seriously. His argument is basically just:

    (P1) There are certain problems whose answers cannot be computed.
    (P2) Humans can believe in an answer anyways.
    (C1) Therefore, humans can believe things that cannot be computed.
    (P3) The outcome of quantum experiments is fundamentally random.
    (C2) Therefore, the outcome of quantum experiments cannot be computed.
    (C3) Therefore, human consciousness must be related to quantum mechanics.

    He then goes out with this preconception to desperately search for any evidence that the brain is a quantum mechanical system, even though most physicists don’t take this seriously, because quantum effects don’t scale up easily for massive objects, warm objects, or objects not isolated from their environment, and all three of those things apply to the human brain.

    In his desperate search to grasp onto anything, he has found very loose evidence that quantum effects might be scaled up a little bit inside of microtubules. The one paper showing this as maybe a possibility, which hasn’t even been replicated, has been plastered everywhere by his team as proof they were right. But this ignores the obvious elephant in the room: microtubules are just structural, are found throughout the body, and have little to do with information processing in the brain, and thus little to do with consciousness.

    The argument he presents that motivates the whole thing also just makes no sense. The fact that humans can choose to believe in things that cannot be computed doesn’t prove human decisions cannot be computed. It just means humans are capable of believing things that they have no good reason to believe… I mean, that is literally a problem with LLMs, sometimes called “hallucinations”: they sometimes just make things up and state them with confidence.

    The idea that it is impossible for a computer to reach conclusions that cannot be proven is silly, because the algorithm for settling on an answer to a question is not one that rigorously validates the truth of the answer; it just activates a black-box network of neurons and settles on whatever answer the neural network outputs with the highest confidence level. If you ask an AI whether the earth orbits the sun and it says yes, it is not because it ran some complex proof at that moment and proved with certainty that the earth orbits the sun before saying it. That’s not how artificial intelligence works, so there is no reason to think that is how human intelligence works either, and thus no reason to expect that humans couldn’t believe things without absolute proof in the first place.



  • The reason quantum computers are theoretically faster is because of the non-separable nature of quantum systems.

    Imagine you have a classical computer where some logic gates flip bits randomly, and multi-bit logic gates could flip them randomly but in a correlated way. These kinds of computers exist and are called probabilistic computers and you can represent all the bits using a vector and the logic gates with matrices called stochastic matrices.

    The vector necessarily is non-separable, meaning, you cannot get the right predictions if you describe the statistics of the computer with a vector assigned to each p-bit separately, but must assign a single vector to all p-bits taken together. This is because the statistics can become correlated with each other, i.e. the statistics of one p-bit depends upon another, and thus if you describe them using separate vectors you will lose information about the correlations between the p-bits.
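    As a sketch of this (a toy two-p-bit example; the basis ordering and the particular correlated-flip gate are my own choices):

```python
import numpy as np

# Stochastic gate on two p-bits (basis order 00, 01, 10, 11):
# with probability 1/2 do nothing, with probability 1/2 flip BOTH bits.
F = np.array([[0, 0, 0, 1],
              [0, 0, 1, 0],
              [0, 1, 0, 0],
              [1, 0, 0, 0]], dtype=float)
M = 0.5 * np.eye(4) + 0.5 * F   # columns sum to 1: a valid stochastic matrix

# Start with both p-bits definitely 0, then apply the correlated gate.
joint = M @ np.array([1.0, 0.0, 0.0, 0.0])   # [0.5, 0, 0, 0.5]

# The marginal (per-p-bit) distributions both look maximally random...
p_a = np.array([joint[0] + joint[1], joint[2] + joint[3]])   # [0.5, 0.5]
p_b = np.array([joint[0] + joint[2], joint[1] + joint[3]])   # [0.5, 0.5]

# ...but rebuilding the joint vector from them loses the correlation:
separable = np.kron(p_a, p_b)   # [0.25, 0.25, 0.25, 0.25] != joint
```

    The joint vector says the two p-bits are always equal; the product of the separate per-bit vectors says all four configurations are equally likely, which is why the single joint vector is indispensable.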

    The p-bit vector grows in complexity exponentially as you add more p-bits to the system (complexity = 2^N where N is the number of p-bits), even though the total states of all the p-bits only grow linearly (complexity = 2N). The reason for this is purely an epistemic one. The physical system only grows in complexity linearly, but because we are ignorant of the actual state of the system (2N), we have to consider all possible configurations of the system (2^N) over an infinite number of experiments.

    The exponential complexity arises from considering what physicists call an “ensemble” of individual systems. We are not considering the state of the physical system as it currently exists right now (which only has a complexity of 2N) precisely because we do not know the values of the p-bits, but we are instead considering a statistical distribution which represents repeating the same experiment an infinite number of times and distributing the results, and in such an ensemble the system would take every possible path and thus the ensemble has far more complexity (2^N).

    This is a classical computer with p-bits. What about a quantum computer with q-bits? It turns out that you can represent all of quantum mechanics simply by allowing probability theory to have negative numbers. If you introduce negative numbers, you get what are called quasi-probabilities, and this is enough to reproduce the logic of quantum mechanics.

    You can imagine that quantum computers consist of q-bits that can be either 0 or 1 and logic gates that randomly flip their states, but rather than representing the q-bit in terms of the probability of being 0 or 1, you can represent the qubit with four numbers, the first two associated with its probability of being 0 (summing them together gives you the real probability of 0) and the second two associated with its probability of being 1 (summing them together gives you the real probability of 1).

    Like normal probability theory, the numbers have to all add up to 1, being 100%, but because you have two numbers assigned to each state, you can have some quasi-probabilities be negative while the whole thing still adds up to 100%. (Note: we use two numbers instead of one to describe each state with quasi-probabilities because otherwise the introduction of negative numbers would break L1 normalization, which is a crucial feature to probability theory.)

    Indeed, with that simple modification, the rest of the theory just becomes normal probability theory, and you can do everything you would normally do in normal classical probability theory, such as build probability trees and whatever to predict the behavior of the system.
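    One concrete realization of this four-number picture is the discrete Wigner function for a single qubit. The sketch below uses Wootters-style phase-point operators (one standard convention, which may differ in details from the paper linked at the end): the four quasi-probabilities always sum to 1, the pairwise (row) sums give the real probabilities of 0 and 1, and for some states individual entries go negative.

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def A(q, p):
    """Phase-point operator at point (q, p) of the 2x2 discrete phase space."""
    return 0.5 * (I2 + (-1)**p * X + (-1)**(q + p) * Y + (-1)**q * Z)

def wigner(rho):
    """Four quasi-probabilities for the state rho; row q sums to P(Z outcome q)."""
    return np.array([[np.real(np.trace(rho @ A(q, p))) / 2 for p in (0, 1)]
                     for q in (0, 1)])

# A stabilizer state |0><0|: all four quasi-probabilities are nonnegative,
# and the first row sums to P(0) = 1.
rho0 = np.array([[1, 0], [0, 0]], dtype=complex)

# A "magic" state with Bloch vector (1, 1, -1)/sqrt(3): one quasi-probability
# is negative, yet the four entries still sum to 1.
r = np.array([1, 1, -1]) / np.sqrt(3)
rho_magic = 0.5 * (I2 + r[0] * X + r[1] * Y + r[2] * Z)
```

    `wigner(rho0)` comes out to [[0.5, 0.5], [0, 0]], while `wigner(rho_magic)` contains a negative entry, illustrating how the quasi-probability table encodes more than an ordinary probability distribution ever could.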

    However, this is where it gets interesting.

    As we said before, the exponential complexity of classical probability is assumed to be merely something epistemic, because we are considering an ensemble of systems, even though the physical system in reality only has linear complexity. Yet, it is possible to prove that the exponential complexity of a quasi-probabilistic system cannot be treated as epistemic. There is no classical system with linear complexity where an ensemble of that system will give you quasi-probabilistic behavior.

    As you add more q-bits to a quantum computer, its complexity grows exponentially in a way that is irreducible to linear complexity. Every time an additional q-bit is added, if you want to simulate the system on a classical computer, you have to increase the number of bits in a way that grows exponentially. At just 300 q-bits, the complexity would be 2^N = 2^300, so the number of bits you would need to simulate it would exceed the number of atoms in the observable universe.

    This is what I mean by quantum systems being inherently “non-separable.” You cannot take an exponentially complex quantum system and imagine it as separable into an ensemble of many individual linearly complex systems. Even if it turns out that quantum mechanics is not fundamental and there are deeper deterministic dynamics, the deeper deterministic dynamics must still have exponential complexity for the physical state of the system.

    In practice, this increase in complexity does not mean you can always solve problems faster. The system might be more complex, but it requires clever algorithms to figure out how to actually translate that into problem solving, and currently there are only a handful of known algorithms you can significantly speed up with quantum computers.

    For reference: https://arxiv.org/abs/0711.4770


  • If you have a very noisy quantum communication channel, you can combine a second algorithm called entanglement distillation with quantum teleportation to effectively bypass the noisy quantum channel and send a qubit over a classical communication channel. That is the main utility I see for it. Basically, very useful for transmitting qubits over a noisy quantum network.


  • The people who named it “quantum teleportation” had in mind Star Trek teleporters which work by “scanning” the object, destroying it, and then beaming the scanned information to another location where it is then reconstructed.

    Quantum teleportation is basically an algorithm that performs a destructive measurement (kind of like “scanning”) of the quantum state of one qubit and then sends the information over a classical communication channel (could even be a beam if you wanted) to another party which can then use that information to reconstruct the quantum state on another qubit.

    The point is that there is still the “beaming” step, i.e. you still have to send the measurement information over a classical channel, which cannot exceed the speed of light.


  • Obvious answer is that the USA is the world’s largest economy while Russia is not, so if the USA says “if you trade with Russia then you can’t trade with me,” most countries will happily cease trading with Russia to remain in the US market, but if Russia said the same about the USA, people would just laugh and keep trading with the USA.

    The only country that might have some leverage in sanctioning the US is China, but China has historically had a “no allies” policy. Chinese leadership hate the idea of alliances because then they would feel obligated to defend their allies, and defending another country is viewed very poorly in Chinese politics. They thus only ever form trade relations and never alliances, meaning if your country is attacked they have no obligation to you. Chinese politicians may verbally condemn the attack, but they won’t do anything beyond that, like imposing sanctions or providing military support.


  • I tried to encourage fellow Linux users to just recommend one distro. It doesn’t have to be a good distro, just one the person is least likely to run into issues with and, if they do, one where they are most likely to be able to find solutions easily. Things like Ubuntu and Mint clearly fit the bill. They can then decide later if they want to change to a different one based on what they learn from using that one.

    No one listened to me, because everyone wants to recommend their personal favorite distro rather than what would lead to the fewest problems for the user and be the easiest to use. A person who loves PopOS will insist the person must use PopOS. A person who loves Manjaro will insist that the person must use Manjaro. Linux users like so many different distros that everyone recommends something different, and it just makes things confusing.

    I gave up even bothering after a while. Linux will never be big on desktop unless some corporation pushes a Linux-based desktop OS.



  • I have used Debian as my daily driver for at least a decade, but I still recommend Mint because it has all the good things about Debian with extras on top.

    Debian developers just push out kernel updates without warning you about possible system incompatibilities. For example, if you have an Nvidia GPU you might get a notification to “update,” and a normie will likely press it only for the PC to boot to a black screen, because Debian pushed out a kernel update that breaks compatibility with the Nvidia drivers and did nothing to warn the user about it. A normie probably won’t know how to get out of the black screen to the TTY and roll back the update.

    I remember this happening before and I had to go to the reddit for /r/Debian and respond to all the people freaking out explaining to them how to fix their system and rollback the update.

    Operating systems like Ubuntu, Mint, PopOS, etc., will do more testing with their kernel before rolling it out to users. They also tend to have more up-to-date kernels. I had Debian on everything but my gaming PC, which I had built recently, because Debian 12 used such an old kernel that it wouldn’t support my motherboard hardware. This was a kernel-level issue and couldn’t be fixed just by installing a new driver. Normies are not going to want to compile their own kernel for their daily driver, and neither do I, even with a lot of Linux experience.

    I ended up just using Mint on that PC until Debian 13 released, because my only other options were to switch to the unstable or testing branch or to compile my own kernel, neither of which I cared to do on a PC I just wanted to work and play Horizon or whatever on.


  • Trying to think of classical models to explain the EPR paradox kinda misses the point of the EPR paradox, because the point is to assume that there is indeed nothing linking the two particles until you look, and then to show that this leads to a contradiction with Einstein’s definition of locality.† You can indeed trivially think of classical explanations for how the +1 and -1 particles might be linked and predetermined, but that’s not the point of the EPR paper, which explores what happens if we don’t make this assumption.

    The paper that instead explores what happens if we do assume they are predetermined is Bell’s theorem, and Bell’s theorem is more complicated than just assuming that the particles are entangled and opposites such that one will be measured to be +1 and the other to be -1. Bell’s theorem shows that the behavior of the individual particle can be dependent upon the configuration of a collection of measurement devices, even if the particle only ever interacts with one measurement device in the collection. That not only violates Einstein’s definition of locality, but if you try to make it deterministic, it ends up violating special relativity as well.

    The simplest demonstration of this is with three particles in the GHZ experiment. The point is, again, not merely that the particles have correlated values but that (1) those values are statistically dependent upon the configuration of the measurement device and (2) the values for an individual particle can be statistically dependent upon the configuration of a collection of measurement devices even if it never interacts with most of the devices in the collection.

    † “Locality” is used in two different senses in the literature. One is relativistic locality which means nothing can travel faster than light. The other is what I like to call coordinate locality which is what Einstein had in mind with the EPR paper, which is the idea that things have to locally interact to become dependent upon one another. The EPR paper is a proof by contradiction that quantum mechanics without hidden variables violates coordinate locality specifically.
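    The GHZ contradiction mentioned above can be checked directly with a few lines of numpy (a standard textbook calculation; the code itself is my own sketch). Each of the three observables containing two Y measurements has expectation −1, so any local deterministic assignment of ±1 values would force the XXX value to equal their product, (XYY)(YXY)(YYX) = −1, yet quantum mechanics gives XXX = +1:

```python
import numpy as np

# Pauli matrices
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)

def kron3(a, b, c):
    """Three-qubit observable from three single-qubit operators."""
    return np.kron(np.kron(a, b), c)

# GHZ state (|000> + |111>)/sqrt(2)
ghz = np.zeros(8, dtype=complex)
ghz[0] = ghz[7] = 1 / np.sqrt(2)

def expect(op):
    """Expectation value <GHZ| op |GHZ>."""
    return np.real(ghz.conj() @ op @ ghz)

xxx = expect(kron3(X, X, X))   # +1
xyy = expect(kron3(X, Y, Y))   # -1
yxy = expect(kron3(Y, X, Y))   # -1
yyx = expect(kron3(Y, Y, X))   # -1
```

    Classically, the Y value of each particle appears twice in the product of the last three observables, so predetermined local values would force the XXX product to be (−1)·(−1)·(−1) = −1, in direct contradiction with the measured +1.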