82 Matching Annotations
  1. Jul 2020
    1. Reservoir engineering even opens up the possibility of an unorthodox implementation of universal quantum computation [244], in which the interaction with the environment is not an adversary but rather a resource for quantum information processing.

      If true, this would completely change the way people think about achieving quantum computation.

    2. Mohseni et al. [232] experimentally demonstrated how the performance of a photonic implementation of the Deutsch–Jozsa quantum algorithm [233] can be substantially enhanced through the use of a DFS. DFSs have also been experimentally realized in quantum cryptography, for example, in the fault-tolerant quantum key distribution protocol proposed and implemented by Zhang et al.

      Shows that decoherence-free subspaces have been applied to quantum computing with some success, depending on the meaning of "substantially enhanced"

    3. coherence times around 1 s.

      A single quantum algorithm is likely to run in less time than this, but for long-term sustained operation of a quantum computer this barely scratches the surface of what is needed.

    4. How much the system becomes entangled – and thus how strong the effect of decoherence is – depends on how its initial quantum state relates to the Hamiltonian that governs the interaction between system and environment.

      The Hamiltonian is the operator corresponding to the sum of the kinetic and potential energies of the particles in a system; in most cases its expectation value is the system's total energy.
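
      As a minimal sketch (my own notation, not the paper's), the kind of Hamiltonian the authors mean splits into a system term, an environment term, and a coupling term:

      ```latex
      % Sketch (my notation, not the paper's): a system qubit S coupled to an
      % environment E. The total Hamiltonian splits into three parts.
      H = H_S \otimes I_E \;+\; I_S \otimes H_E \;+\; H_{\mathrm{int}},
      \qquad
      H_{\mathrm{int}} = \sigma_z \otimes B
      ```

      If the system starts in an eigenstate of the coupling operator (here σ_z) it hardly entangles with the environment, whereas a superposition of those eigenstates entangles, and hence decoheres, quickly – which is the dependence on the initial state that the quote describes.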

    5. The insight is that realistic quantum systems are never completely isolated from their environment, and that when a quantum system interacts with its environment, it will in general become rapidly and strongly entangled with a large number of environmental degrees of freedom.

      succinct definition of decoherence

    6. Hilbert space

      Hilbert space - an abstract vector space that extends the methods of vector operations and calculus from the 2D Euclidean plane to spaces with any number of dimensions, finite or infinite.

    1. It now appears that, at least theoretically, quantum computation may be much faster than classical computation for solving certain problems [5–7], including prime factorization

      This prime factorization problem led to Shor's algorithm, developed by the author of this paper himself, that would allow us to break RSA encryption.

    2. Thus, for a probability of decoherence less than […], we have an improved storage method for quantum-coherent states of large numbers of qubits. Since p generally increases with storage time the watchdog effect could be used to store quantum information over long periods by using the decoherence restoration scheme to frequently reset the quantum state.

      The problem then becomes physically getting these large numbers of qubits to coexist.

    3. It seems that we are getting something for nothing, in that we are restoring the state of the superposition to the exact original predecoherence state

      Not quite -- this scheme requires 8 additional qubits per encoded qubit to function properly, which has proven to be quite the cost.

    4. There is a cost for using this scheme. First, the number of qubits is expanded from k to 9k

      Interestingly, some current devices have qubit counts that are multiples of 9 (e.g. 54-qubit chips), which would in principle suit Shor's k → 9k scheme, though I'm not sure that is the actual design motivation.
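
      For reference, the k → 9k overhead comes from Shor's nine-qubit code, whose logical basis states are:

      ```latex
      % Shor's nine-qubit code: one logical qubit encoded in nine physical qubits.
      |0_L\rangle = \tfrac{1}{2\sqrt{2}}\,\left(|000\rangle + |111\rangle\right)^{\otimes 3},
      \qquad
      |1_L\rangle = \tfrac{1}{2\sqrt{2}}\,\left(|000\rangle - |111\rangle\right)^{\otimes 3}
      ```

      The inner three-qubit repetition protects against bit flips and the outer layer against phase flips, which is why exactly nine physical qubits are needed per logical qubit.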

    5. The critical assumption here is that decoherence only affects one qubit of our superposition, while the other qubits remain unchanged. It is not clear how reasonable this assumption is physically,

      This independent, single-qubit error assumption has largely held up in practice, though correlated errors across neighbouring qubits do occur.

    6. We must also initialize the computer by putting its memory in some known state. This could be done by postulating a separate operation, initialize, which sets a qubit to a predetermined value.

      Many of the other articles I have parsed through refer to this step as state preparation or initialization.

    7. The classical analog of our problem is the transmission of information over a noisy channel; in this situation, error-correcting codes can be applied so as to recover with high probability the transmitted information even after corruption of some percentage of the transmitted bits, where the percentage depends on the Shannon entropy of the channel. We give a quantum analog of the most trivial classical coding scheme: the repetition code, which provides redundancy by duplicating each bit several times [11]. This encoding scheme might be useful when storing qubits in the internal memory of the quantum computer; so that while qubits are in storage they avoid (or at least undergo reduced) decoherence, leaving decoherence to occur mainly in qubits actively involved in the computation

      This is a bit dated as the duplication of qubits has proven to be quite problematic given the difficulty of maintaining a large-scale system.
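
      A quick classical sketch (my own, not from the paper) of why repetition helps at all: with a 3-bit repetition code and independent flip probability p per bit, majority voting only fails when two or more copies flip.

      ```python
      # Sketch: classical 3-bit repetition code under independent bit-flip noise.
      # Majority vote fails only if >= 2 of the 3 copies flip, so the logical
      # error rate is 3 p^2 (1-p) + p^3, which beats p whenever p < 1/2.
      import random

      def logical_error_rate(p, trials=100_000):
          errors = 0
          for _ in range(trials):
              flips = sum(random.random() < p for _ in range(3))
              if flips >= 2:          # majority vote decodes incorrectly
                  errors += 1
          return errors / trials

      for p in (0.01, 0.05, 0.1):
          print(p, logical_error_rate(p), 3 * p**2 * (1 - p) + p**3)
      ```

      The quantum analog cannot simply copy amplitudes (no-cloning), which is why Shor's code spreads the state over entangled qubits instead of literally duplicating it.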

    1. Parity is therefore the error syndrome, and the information can be recovered if the number of parity jumps is faithfully measured. This requires parity measurements to be performed frequently relative to the single-photon loss rate.

      Logical errors due to spontaneous relaxation of ancillae will increase with the increased frequency of parity measurements

    2. This cost forces the designer of an error correction protocol to measure the error syndrome less frequently than would otherwise be desirable and consequently reduces the potential achievable lifetime gain.

      A nigh-insurmountable contradiction.

    3. We encode quantum information using the Schrödinger cat code

      I'm sorry for wasting an annotation on this, truly, but...the fact that it's called the Schrodinger cat code just makes me unreasonably giggly

    4. ancilla errors are allowed to accumulate in degrees of freedom that need not be monitored or corrected. Recently, this type of syndrome measurement was demonstrated in both trapped ions and superconducting qubits by using a four-qubit code that allows error detection but not error correction (8, 9).

      The "better-than-nothing" outlook

    5. Although this may prevent ancilla errors from spreading in the system, it comes at the cost of an increased hardware overhead.

      Increasing the already-detrimental hardware overhead is not scalable.

    6. Errors are typically detected by mapping properties of the system, known as error syndromes, onto an ancillary system, which is subsequently measured.

      Construction of said ancillary system becomes an extra time-sink, money-sink, etc
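
      To make "mapping error syndromes onto an ancillary system" concrete, here is a toy classical sketch (my own) for a 3-qubit bit-flip code: the two parities q0⊕q1 and q1⊕q2 are exactly what a real device would copy onto ancilla qubits and measure.

      ```python
      # Toy sketch: syndrome extraction for a 3-qubit bit-flip code.
      # The parities (q0 xor q1, q1 xor q2) form the "error syndrome" that a real
      # device would map onto ancilla qubits and measure; here we just compute them.
      SYNDROME_TO_CORRECTION = {
          (0, 0): None,  # no error detected
          (1, 0): 0,     # flip on qubit 0
          (1, 1): 1,     # flip on qubit 1
          (0, 1): 2,     # flip on qubit 2
      }

      def correct(bits):
          syndrome = (bits[0] ^ bits[1], bits[1] ^ bits[2])
          qubit = SYNDROME_TO_CORRECTION[syndrome]
          if qubit is not None:
              bits[qubit] ^= 1
          return bits

      print(correct([0, 1, 0]))  # single flip on the middle qubit -> [0, 0, 0]
      ```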

    7. Scalable quantum computation will require fault tolerance for every part of a logical circuit, including state preparation, gates, and measurements (2).

      As was the issue with classical computing for a long while

    1. Despite their differences, all mappers needs an internal representation of key quantities and these can be combined in the concept of the execution snapshot. As the name suggests, the execution snapshot is a complete description of the algorithm and its current, usually partial, schedule. It contains:

      In terms of scalability, such "execution snapshots" seem to be at risk of piling up and hogging memory.

    2. These signals are generated by classical electronics such as Arbitrary Waveform Generators (AWGs) located at room temperature. Qubits could be operated independently by having a dedicated control device for each of them. This would allow, for instance, to perform in parallel any possible combination of single-qubit gates as long as the dependency between the operations was respected. However, this dedicated control approach is not an scalable, feasible and affordable (in terms of cost), specially for building large-scale quantum systems

      The issues regarding this approach seem to be "malleable" in the sense that as time goes on, funding increases, etc., it could be something to consider. It seems like a resource issue more than anything.

    3. A more scalable quantum processor with a surface code architecture was presented in [11], [12], called Surface-17. This quantum chip has been built with the goal of demonstrating fault-tolerant (FT) computation in a large-scale quantum system based on surface code [60], one of the most promising quantum error correction (QEC) codes. However, it can also be considered as a NISQ device and therefore be used for running quantum algorithms that require up to 17 qubits

      This is the most intriguing solution to me so far.

    4. exact approaches [30], [43], [49] are feasible when considering relatively small number of qubits and gates, giving minimal or close-to-minimal solutions. However, they are not scalable

      The lack of scalability in exact solutions is troublesome

    5. as they are accessible through the cloud.

      Meaning access is bottlenecked by "classical" computing constraints, since cloud-based quantum systems still must interface with end users via traditional computer systems

    6. The problem is simply stated: one needs to schedule a two-qubit gate but the corresponding program qubits are currently placed on non-connected physical qubits.

      Confirms the above

    7. 1) express the operations in terms of the gates native to the quantum processor, a task called gate decomposition, 2) initialize and maintain the map specifying which physical qubit (qubit in the quantum device) is associated to each program qubit (qubit in the circuit description, sometimes called logical qubit in the literature), a task called placement of the qubits, and 3) schedule the two-qubit gates compatibly with the physical connectivity, often by introducing additional routing operations.

      These additional qubit routing operations potentially being the solution to the "next-neighbor" constraint posed earlier in the article.
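
      A minimal sketch of the routing step (the linear topology and function names are my own assumptions, not from the paper): on a 1-D nearest-neighbour chain, if a two-qubit gate is scheduled between program qubits whose physical locations are not adjacent, SWAPs are inserted until they are.

      ```python
      # Sketch: naive SWAP routing on a linear nearest-neighbour topology.
      # placement[i] = physical position of program qubit i.
      def route_two_qubit_gate(placement, a, b):
          """Return the SWAPs needed before a gate on program qubits a and b."""
          swaps = []
          step = 1 if placement[a] < placement[b] else -1
          # Move qubit a one physical site at a time until it neighbours qubit b.
          while abs(placement[a] - placement[b]) > 1:
              here, there = placement[a], placement[a] + step
              swaps.append((here, there))
              # Whichever program qubit sat at `there` moves back to `here`.
              for q, pos in placement.items():
                  if pos == there:
                      placement[q] = here
              placement[a] = there
          return swaps

      placement = {0: 0, 1: 3, 2: 1, 3: 2}
      print(route_two_qubit_gate(placement, 0, 1))  # [(0, 1), (1, 2)]
      print(placement)                              # updated qubit placement
      ```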

    8. As in classical computers, quantum algorithms described as programs using a high-level language have to be compiled into a series of low-level instructions like assembly code and, ultimately, machine code. As sketched in Figure 2, in a quantum computer these instructions need to be ultimately translated into the pulses that operate on the qubits and perform the desired operation [19].

      The issue of mapping was the source of my confusion regarding high-level quantum languages. If we can forego the conversion to assembly, why have such languages at this point anyway? Granted, a classical computer scientist probably would've said the same thing in the 60s and 70s.

    9. Sequences of quantum operations finally define quantum algorithms which are usually described by high-level quantum languages (e.g. Scaffold [14] or Quipper [15]), quantum assembly languages (e.g. OpenQASM 2.0 developed by IBM [16] or cQASM [17]), or circuit diagrams.

      Interesting - I didn't know high level quantum languages (or at least higher than QASM) were fleshed-out to the point of usability at this point. Will have to look into those.

    10. Qubits can assume the well-known basis states |0〉 and |1〉 (here written using Dirac notation), but can also be put into superposition of both. More precisely, the state of a qubit (in other words, a quantum state) can be described by |ψ〉 = α0|0〉 + α1|1〉, where α0 and α1 are complex numbers called amplitudes and |α0|² + |α1|² has to be equal to 1. Measuring a single qubit in the computational basis will result in a binary value, 0 or 1, collapsing the qubit to either of the two basis states |0〉 and |1〉 with probability |α0|² and |α1|², respectively.

      A very clean and concise way to describe quantum bits to a classical computer scientist
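
      A tiny numerical sketch (my own) of the measurement statistics described in the quote:

      ```python
      # Sketch: a single-qubit state |psi> = a0|0> + a1|1> and its measurement
      # statistics in the computational basis.
      import numpy as np

      a0, a1 = 1 / np.sqrt(3), np.sqrt(2 / 3) * 1j   # any amplitudes with |a0|^2 + |a1|^2 = 1
      psi = np.array([a0, a1])
      assert np.isclose(np.vdot(psi, psi).real, 1.0)  # normalisation check

      p0, p1 = abs(a0) ** 2, abs(a1) ** 2
      print(p0, p1)                                   # 1/3 and 2/3

      # Simulate repeated measurements: each one collapses the qubit to 0 or 1.
      rng = np.random.default_rng(0)
      outcomes = rng.choice([0, 1], size=10_000, p=[p0, p1])
      print(outcomes.mean())                          # ~ 2/3
      ```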

    11. This is one of the main constraints of today’s quantum devices and frequently requires the quantum information stored in the qubits to be moved to other adjacent qubits – typically by means of SWAP operations.

      Each swap operation delays the completion of an algorithm additively.
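
      For reference (my own sketch, not from the paper): a SWAP itself decomposes into three CNOTs, so every routed SWAP adds roughly three two-qubit gates of extra depth, which is where the additive delay comes from.

      ```python
      # Sketch: a SWAP gate decomposes into three CNOTs, so each routing SWAP
      # costs ~3 two-qubit gates of extra depth.
      import numpy as np

      I = np.eye(2)
      X = np.array([[0, 1], [1, 0]])
      P0 = np.diag([1, 0])   # |0><0|
      P1 = np.diag([0, 1])   # |1><1|

      cnot_01 = np.kron(P0, I) + np.kron(P1, X)   # control = qubit 0
      cnot_10 = np.kron(I, P0) + np.kron(X, P1)   # control = qubit 1

      swap = cnot_01 @ cnot_10 @ cnot_01
      expected = np.array([[1, 0, 0, 0],
                           [0, 0, 1, 0],
                           [0, 1, 0, 0],
                           [0, 0, 0, 1]])
      print(np.allclose(swap, expected))  # True
      ```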

    12. In these quantum processors, qubits are arranged in a 2D topology with limited connectivity between them and in which only nearest-neighbor (NN) interactions are allowed.

      A massively restrictive constraint

    13. Quantum computing is currently moving from an academic idea to a practical reality. Quantum computing in the cloud is already available and allows users from all over the world to develop and execute real quantum algorithms. However, companies which are heavily investing in this new technology such as Google, IBM, Rigetti, Intel, IonQ, and Xanadu follow diverse technological approaches. This led to a situation where we have substantially different quantum computing devices available thus far. They mostly differ in the number and kind of qubits and the connectivity between them

      Thus the discussion as to the most scalable model of quantum computation begins. (Or at least it will when these corporations decide to collaborate on such things)

    1. Realistic hardware is subject to intrinsic noise that affects the quantum dynamics of the system, and therefore needs to be considered when evaluating the efficiency of quantum annealing hardware

      The beginning of a "snowball" effect that leads to the state of decoherence among qubits.

    2. Intrinsic noise cannot be eliminated from real quantum devices: manufacturing imperfections, as well as thermal fluctuations, induce quantum dephasing and decoherence (see Section 6)

      *as of right now

    3. In one direct QUBO formulation, a bit is associated to the execution of a given job in a given machine (out of M possible) at a given time (discretized in T slots), allowing for very efficient mappings on current quantum annealers supporting two-body Ising-type interactions, using N·M·T qubits, where N is the number of jobs. While objective functions of the priority maximization type are easily implementable as linear penalty functions requiring only local fields on the corresponding logical bits, objectives requiring makespan minimization require a more involved encoding with either T ancilla clock variables highly connected to the qubits relative to the jobs scheduled last, or by complementing the quantum solver with guidance from classical methods, such as binary search

      Highlights the fact that classical computing algorithms are still needed in conjunction with newly developed quantum algorithms to achieve applicable goals, at least in cases like these scheduling problems.
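
      A toy sketch of the direct QUBO encoding described above (the variable layout and penalty weight are my own assumptions, not the paper's exact formulation): one binary variable per (job, machine, time slot), with a quadratic penalty forcing each job to be scheduled exactly once.

      ```python
      # Toy sketch of a direct QUBO for job scheduling: one binary variable per
      # (job, machine, time slot), N*M*T variables in total. The penalty enforces
      # "each job assigned exactly once"; a real formulation adds machine-conflict
      # and precedence penalties, and makespan terms need extra encoding tricks.
      from itertools import product

      N, M, T = 2, 2, 3        # jobs, machines, time slots
      A = 2.0                  # penalty weight (assumed; must dominate the objective)

      variables = list(product(range(N), range(M), range(T)))
      Q = {}                   # QUBO as {(var, var): coefficient}

      def add(u, v, coeff):
          key = tuple(sorted((u, v)))
          Q[key] = Q.get(key, 0.0) + coeff

      for j in range(N):
          # Penalty A * (sum_{m,t} x[j,m,t] - 1)^2, expanded into QUBO terms
          # (using x^2 = x for binary variables).
          slots = [(j, m, t) for m in range(M) for t in range(T)]
          for u in slots:
              add(u, u, -A)                      # diagonal terms
          for u, v in product(slots, slots):
              if u < v:
                  add(u, v, 2 * A)               # cross terms
      print(len(variables), "variables,", len(Q), "QUBO terms")
      ```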

    4. .

      The route inspection problem:

      A generalization is to choose any set T of evenly many vertices that are to be joined by an edge set in the graph whose odd-degree vertices are precisely those of T. Such a set is called a T-join. This problem, the T-join problem, is also solvable in polynomial time by the same approach that solves the postman problem. (Wikipedia)

    5. 4.1. Quantum annealing for planning and scheduling

      Again including the route inspection problem, one of the first problems identified as solvable at any scale by quantum computers

    6. The current D-Wave machine at NASA has 12 × 12 such units and a total of 1152 qubits, of which 1097 are working.

      Either I was wrong about the current scale of the higher-end quantum computers, or the D-Wave annealer loses such a vast chunk of its power from the process of embedding that it doesn't stack up to other models of quantum computers containing far fewer qubits.

    7. Unlike the mapping step, the embedding step is hardware dependent. A cluster of qubits {yi, k} connected to each other in the hardware graph will represent a single variable xi. For any term xixj in the mapped QUBO, there is a connection in the embeddable QUBO between one of the qubits in the cluster for xi and one qubit in the cluster for xj

      This is where the issues of decoherence begin to present themselves even in situations where most of the power of quantum computation is sacrificed for the sake of feasibility.
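
      A rough sketch (my own illustration, not D-Wave's actual embedding tooling) of what the embedding step produces: each logical variable becomes a chain of physical qubits tied together by a strong ferromagnetic coupling, and each logical coupling is realized between one qubit of each chain.

      ```python
      # Sketch: minor-embedding a logical QUBO variable as a chain of physical
      # qubits. Strong ferromagnetic couplings (-F) force the chain to act as one
      # variable; logical couplings attach to one qubit of each chain.
      F = 4.0   # chain coupling strength (assumed; must exceed problem couplings)

      chains = {           # logical variable -> physical qubits (hardware-dependent)
          "x0": [0, 4],
          "x1": [1, 5],
      }
      logical_couplings = {("x0", "x1"): 1.0}

      embedded = {}
      for qubits in chains.values():
          for a, b in zip(qubits, qubits[1:]):
              embedded[(a, b)] = -F                       # keep the chain aligned
      for (u, v), w in logical_couplings.items():
          embedded[(chains[u][0], chains[v][0])] = w      # pick one qubit per chain

      print(embedded)   # {(0, 4): -4.0, (1, 5): -4.0, (0, 1): 1.0}
      ```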

    8. two main steps in programming a quantum annealer: mapping the problems to QUBO; and embedding, which takes these hardware-independent QUBOs to other QUBOs that match the specific quantum annealing hardware that will be used.

      This is incredibly slow compared to the rate at which a "real" (yes, I'm aware it's a No-True-Scotsman fallacy) quantum computer can process similar problems, in theory

    9. A wide class of optimization problems of practical interest can be expressed in terms of cost functions that are polynomials over finite sets of binary variables.

      Such as the "postman" problem, also known as the route inspection problem

    10. empirical testing becomes possible only as quantum computation hardware is built.

      I like that they mention this, it ties in well with the idea of a quantum winter in which progress slows to the extent that funding dries up.

    1. Successful implementations of optical non-linearity enable photonic two-qubit gates [261 Nemoto, K.; Munro, W.J. Phys. Rev. Lett. 2004, 93, 250502-1–250502-4] and non-destructive Bell-state detection [262 Barrett, S.D.; Kok, P.; Nemoto, K.; Beausoleil, R.G.; Munro, W.J.; Spiller, T.P. Phys. Rev. A 2005, 71, 060302], and photon switching [263 Harris, S.E.; Yamamoto, Y. Phys. Rev. Lett. 1998, 81, 3611–3614 – 267 Tiecke, T.G.; Thompson, J.D.; de Leon, N.P.; Liu, L.R.; Vuletić, V.; Lukin, M.D. Nature 2014, 508, 241–244].

      Will have to further research what these specific implementations entail

    2. This removes the requirement for quantum memories to operate at telecom wavelengths, at the expense of increasing the number of repeater stations required per unit length

      Currently an expense that cannot be afforded, but depending on availability, efficiency, and reliability of future quantum repeaters, is a promising outlook

    3. is their excellent coherence properties at cryogenic temperatures. This is particularly important for application of quantum memories in long-distance quantum communications. In order to enable entanglement distribution beyond distances achievable by direct transmission in fibres (>1000 km), storage times in excess of milliseconds will be required.

      Functionality at cryogenic temperatures is essential, as most currently-conceived quantum computers must be kept near absolute zero

    4. Optical π-pulses

      A π-pulse drives the two-level transition for exactly half a Rabi cycle, completely transferring the population from one level to the other (full inversion).

      The explanations I could find were in broken English, but that seems to be the gist.

    5. π-pulse

      I'm seeing a lot of talk about pi-pulses here and in the next few paragraphs, will pause here to read more about what that means
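
      For my own notes, in the two-level picture: a π-pulse is a resonant drive applied for half a Rabi period, rotating the qubit state by π on the Bloch sphere and fully inverting the population.

      ```latex
      % A resonant pulse of area \theta rotates the qubit by \theta about the x axis.
      R_x(\theta) =
        \begin{pmatrix}
          \cos(\theta/2) & -i\sin(\theta/2) \\
          -i\sin(\theta/2) & \cos(\theta/2)
        \end{pmatrix},
      \qquad
      R_x(\pi)\,|0\rangle = -i\,|1\rangle
      \quad \text{(complete population inversion)}
      ```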

    6. it is interesting to note that most protocols have been implemented on at least two different platforms, and most platforms can support multiple protocols.

      This flexibility is key in terms of the long term scalability of quantum memory implementations

    7. interconvert material

      interconvertibility - the property of two things being able to be freely converted into one another, reversibly.

      Had to Google that one

    8. and as such these research programmes represent the most advanced techniques for the quantum control of optical signals.

      Glad to know I'm not the only one confused here.

    9. Quantum sensors, quantum computers and quantum cryptography all have specific enhancements over their classical counterparts [1 Giovannetti, V.; Lloyd, S.; Maccone, L. Nat. Photonics 2011, 5, 222–229 – 3 Gisin, N.; Ribordy, G.; Tittel, W.; Zbinden, H. Rev. Mod. Phys. 2002, 74, 145–195]

      In certain scenarios, this holds true. I feel as though the authors are describing here an idealistic version of a perfect quantum sensor/computer, which may or may not ever be possible to produce.

    10. The ability of quantum memories to synchronize probabilistic events makes them a key component in quantum repeaters and quantum computation based on linear optics.

      This is a big deal - the KLM and boson sampling models of quantum computation both rely on linear optics.

    1. The obtained results surely have a clear intuitive explanation, since the more the entropy of the reduced state of the qubit is, the more it is entangled to remaining qubits of the system, and the more it is ‘informationally important’ to the whole state.

      They reiterate the notes I made in my previous annotation.

    2. For all the channels we have seen that for given decoherence strength the maximal fidelities in all the schemes except ‘both collective’ is expressed with the linear entropy of the qubit under decoherence S_Q^lin. In particular, we have obtained that the larger is the linear entropy the lower is the fidelity

      Lower entropy equating to higher fidelity in transmitted data seems fairly intuitive to me, even without extensive technical knowledge on the matter.
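
      A small numerical sketch (my own) of the quantity involved: the linear entropy S_lin = 1 − Tr(ρ²) of one qubit's reduced state is 0 for a product state and 1/2 (maximal for a qubit) for a maximally entangled state.

      ```python
      # Sketch: linear entropy S_lin = 1 - Tr(rho^2) of one qubit's reduced state.
      # S_lin = 0 for a product state, 0.5 for a maximally entangled 2-qubit state.
      import numpy as np

      def linear_entropy_of_first_qubit(psi):
          rho = np.outer(psi, psi.conj()).reshape(2, 2, 2, 2)
          rho_A = np.trace(rho, axis1=1, axis2=3)        # partial trace over qubit B
          return 1 - np.trace(rho_A @ rho_A).real

      product = np.kron([1, 0], [1 / np.sqrt(2), 1 / np.sqrt(2)])
      bell = np.array([1, 0, 0, 1]) / np.sqrt(2)
      print(linear_entropy_of_first_qubit(product))  # ~0.0
      print(linear_entropy_of_first_qubit(bell))     # ~0.5
      ```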

    3. We have shown that two schemes of this class, namely ‘individual-then-collective’ and ‘collective-then-individual’ provide the same maximal achievable levels of the fidelity.

      The issue then would become one of efficiency, speed, and ease of implementation.

    4. In our consideration, the main studied characteristic is the fidelity regarding input and output states.

      An ideal consideration when trying to implement error correction measures, but the feasibility of achieving such fidelity with currently available methods and/or components is an issue as well

    5. First, we use a pre-processing procedure for preparing a given known quantum state of the system in a specific form. Next, we use a post-processing operation, which follows the action of a decoherence channel. These operations can be implemented as unitary operators, and their particular form can be efficiently constructed based on prior knowledge of the state under the protection and decoherence channel, i.e. the method is state-dependent.

      I feel this assumption of prior knowledge of the state limits the usefulness of these unitary operations in circumstances where the checked state does not match the required one for the method to run. Inserting methods for all possible states may or may not be feasible; I feel I lack critical context

    6. Figure 1. Error suppression scheme based on pre-processing and post-processing unitary operations, which are designed specifically for the input state and decoherence channel in order to maximize the fidelity of the output state

      Will use this to describe the basic "building blocks" of currently proposed solutions to the issues of decoherence
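
      To visualise those building blocks, here is a deliberately oversimplified numerical sketch (my own, not the authors' actual construction): a known single-qubit state is rotated by a pre-processing unitary, passed through a dephasing channel, and rotated back by a post-processing unitary, and the output fidelity is compared with and without the protection. For this toy channel the protection is perfect, which is not the general case treated in the paper.

      ```python
      # Simplified sketch of the pre/post-processing idea for one qubit under a
      # dephasing channel. The actual operators in the paper are state-dependent
      # and optimised; here the "protection" just rotates the known state onto the
      # z-axis (where dephasing acts trivially) and back again afterwards.
      import numpy as np

      def dephase(rho, p):
          Z = np.diag([1, -1])
          return (1 - p) * rho + p * Z @ rho @ Z

      def fidelity(psi, rho):
          return float(np.real(psi.conj() @ rho @ psi))

      theta = np.pi / 3
      psi = np.array([np.cos(theta / 2), np.sin(theta / 2)])     # known input state
      rho = np.outer(psi, psi.conj())

      # Pre-processing: rotate |psi> to |0>, which dephasing leaves untouched.
      U_pre = np.array([[np.cos(theta / 2),  np.sin(theta / 2)],
                        [-np.sin(theta / 2), np.cos(theta / 2)]])
      p = 0.2
      unprotected = dephase(rho, p)
      protected = U_pre.conj().T @ dephase(U_pre @ rho @ U_pre.conj().T, p) @ U_pre

      print(fidelity(psi, unprotected))  # < 1
      print(fidelity(psi, protected))    # = 1 for this simple channel
      ```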

    7. we stress on possible methods for error suppression that would reduce, but not eliminate the effect of decoherence in transferring quantum states

      The authors take a different approach to quantum error suppression than in the other works I've been reading

    8. the current challenge is to scale such devices with respect to the number of qubits inside quantum computers and the distance of quantum state transfer [6]. A major barrier is protecting quantum systems from decoherence, which is the main source of errors in quantum information processing devices

      This blurb reaffirms the entire premise of my research

    9. In contrast to quantum error correction and measurement-basedmethods, the suggested approach relies on specifically designed unitary operators for a particularstate without the need in ancillary qubits or post-selection procedures.

      Much like in Mitigation of Decoherence-Induced Quantum-Bit Errors and Quantum-Gate Errors Using Steane’s Code, May 2020 (Rosie Cane, Daryus Chandra, Soon Xin Ng, Lajos Hanzo, IEEE Access), the emphasis here is on unitary operations forgoing the need for ancillary qubits or post-selection procedures that would siphon resources from the task at hand.

    10. Decoherence is a fundamental obstacle to the implementation of large-scale and low-noise quantum information processing devices.

      I may try to find similar quotes in the other works I plan to reference to compile them all for the purpose of stressing (arguably) the most important issue at hand regarding the advancement of quantum information technology.

    1. Fault-tolerant QECCs are capable of encoding unknown states [1]. This is because the traditional unitary encoding circuits are not fault tolerant. Practical quantum circuits experience both gate-induced qubit errors with a probability of P_g as well as qubit errors imposed by the decoherence probability of P_e. We found that improved logical qubit reliability can be attained using non-fault tolerant QECC’s when P_e is an order of magnitude higher than P_g. However, this imposes a strict condition on our quantum channel model, where the channel parameters have to obey the specific conditions unveiled in this treatise. In our future work, we will design fault tolerant schemes for encoding unknown states in the face of realistic quantum impairments using bespoke QECCs. Another direction for this simulation is to consider circuits, which have more transversal gates. A single transversal gate can be implemented in each error correction step. Therefore, multiple transversal gates can be constructed by repeatedly implementing the scheme presented here in succession for a certain circuit depth. This can be tested by simulations for determining the effect of circuit depth on the gate error rate thresholds.

      Will use parts of this to explain some advancements being made in the mitigation of decoherence, citing the "TO DO" reference to the authors' future work to convey that research is still ongoing.

    2. A fault tolerant circuit construction mitigates both the gate error and proliferation error probability.

      Will use to explain some of the statements presented in the beginning of the paper

    3. The dynamics of the quantum world are described by unitary transformations, which preserve the dimensions of the system.

      Highlights a key, useful difference between classical computing and quantum computing

    4. The unit of quantum computing is the quantum bit (qubit). A qubit can reside in a superposition of the unit vectors |0⟩ and |1⟩ corresponding to the classical bit values 0 and 1 [3], [29], [30]. The information stored in a qubit is processed by quantum logic gates. These are introduced in the following section, starting with the most common two-qubit gate, namely the CNOT gate.

      Critical background information that I will likely cite in my writing to explain quantum computers at a base level.

    5. The seminal conception of fault tolerant QECC’s by Shor [7] combined with the threshold theorem of Aharonov and Ben-Or [12] provided a proof of concept that quantum computers may execute a quantum algorithm to a reasonable accuracy despite imperfect components.

      This was the first necessary step in determining that quantum computing was feasible in any capacity.

    6. Unfortunately, fault tolerant state preparation techniques impose a substantial qubit overhead, since the stabilizer must be repeated multiple times to guarantee that a single error-free outcome can be obtained.

      This cuts into the speed of the system

    7. since environmental perturbations may affect a group of components in each others vicinity.

      This gets straight to the point about the problems with decoherence, and the issue of scaling quantum computers. If a single perturbation can affect a group of components, the more qubits we attempt to fit into a system, the more "surface area", so to speak, there is for these perturbations to occur.

    8. A fault tolerant Quantum Error Correction Code (QECC) is by definition capable of avoiding the propagation of errors.

      Fault tolerance is one of the main issues with current quantum computing models.