>>5893 >>5894 is a pretty good explanation tbh. it's also why right now google, ibm, et al. are measuring their progress in building a quantum computer by its quantum volume, a number that takes into account both the number of qubits and their effective error rate, because every added qubit that can operate relatively error-free theoretically doubles the size of the state space you can operate on, so the capability grows exponentially. then all you need is an actual algorithm that can take advantage of this capability in a way that makes it more efficient than classical computers.
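to make the quantum volume idea a bit more concrete, here's a toy sketch (my own code, not ibm's actual benchmark, and the pass probabilities are made up): the protocol boils down to QV = 2^n for the largest width n where a random square circuit of width n and depth n still passes the heavy-output test with probability above 2/3.

[code]
# toy sketch of the quantum volume definition (not ibm's real benchmark code).
# assume we already measured, per circuit width n, how often a random
# width-n / depth-n circuit produced "heavy" outputs on the device.
heavy_output_prob = {2: 0.92, 3: 0.85, 4: 0.78, 5: 0.71, 6: 0.64}  # made-up numbers

PASS_THRESHOLD = 2 / 3  # heavy-output threshold used in the QV protocol

def quantum_volume(results):
    # largest width n such that every square circuit up to width n passes the test
    n = 0
    for width in sorted(results):
        if results[width] > PASS_THRESHOLD:
            n = width
        else:
            break
    return 2 ** n

print(quantum_volume(heavy_output_prob))  # 2**5 = 32 with these made-up numbers
[/code]

the point being: more qubits alone doesn't raise the number, the circuits also have to survive deep enough before the errors drown the signal.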
as
>>5894 said, pathfinding is one example of a problem that might benefit from this. in complexity terms, if you take A* (whose worst-case node expansions still blow up exponentially with solution depth on hard instances) and compare it to a quantum search over the same state space then, assuming there are enough qubits with a negligible error rate, you get a provable quadratic speedup from grover-style search, and for specially structured trees (the glued-trees quantum walk) the gap can even be exponential.
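if you want to poke at the search speedup yourself, here's a quick state-vector simulation of grover's search (plain numpy, nothing device specific; the marked index is just an arbitrary target i picked):

[code]
# minimal classical simulation of grover's search over N = 2**n_qubits states,
# just to show the quadratic speedup: ~sqrt(N) iterations instead of ~N probes.
import numpy as np

def grover_search(n_qubits, marked):
    N = 2 ** n_qubits
    # start in the uniform superposition over all N basis states
    state = np.full(N, 1 / np.sqrt(N))
    iterations = int(np.floor(np.pi / 4 * np.sqrt(N)))
    for _ in range(iterations):
        # oracle: flip the sign of the marked state's amplitude
        state[marked] *= -1
        # diffusion: reflect every amplitude about the mean amplitude
        state = 2 * state.mean() - state
    # measurement probabilities are the squared amplitudes
    return int(np.argmax(state ** 2)), iterations

guess, iters = grover_search(10, marked=321)
print(guess, iters)  # finds index 321 after ~25 iterations vs ~512 expected classical probes
[/code]

note this is still "only" quadratic, and for unstructured search that's provably the best a quantum computer can do, which is why the structure of the problem matters so much.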
pathfinding of course is fairly unique here because it's memory-bound first and foremost, and every attempt at a quantum pathfinding algorithm i know of currently spends something like a qubit per pathing decision, so on long enough paths this really does exhaust the capabilities of current quantum computers once you take into account that the "best" ones (google, ibm) have around 50 qubits at the moment. ibm is "planning for a 1000 qubit computer by 2023" but right now those are just big words.
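the qubit budget is easy to eyeball. assuming a naive encoding i made up for illustration (2 qubits per step on a 4-neighbour grid, one of up/down/left/right per decision), 50-ish qubits doesn't buy you a very long path:

[code]
# back-of-the-envelope qubit budget for a naive path encoding (illustrative only):
# on a 4-neighbour grid each step is one of {up, down, left, right} -> 2 qubits per step.
QUBITS_PER_STEP = 2

def max_path_length(total_qubits, overhead=0):
    # overhead = qubits reserved for ancillas / error mitigation, whatever the scheme needs
    return (total_qubits - overhead) // QUBITS_PER_STEP

print(max_path_length(50))      # 25 steps on an ideal 50-qubit device
print(max_path_length(50, 10))  # 20 steps if 10 qubits go to ancillas
[/code]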
but to generalise it: if you are working on an algorithm that has to find the correct state in a large state space, unstructured quantum search (grover) gives you a quadratic speedup, and when the problem has structure a quantum algorithm can exploit, the speedup can become exponential. and this is before we get to unique solutions that use entanglement like
https://en.wikipedia.org/wiki/Shor%27s_algorithm
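for reference, the whole trick in shor's is that the quantum part only does the period finding; everything around it is classical number theory. here's the classical skeleton with the period found by brute force (so it runs on any laptop, just without the speedup):

[code]
# classical skeleton of shor's algorithm: factor N once you know the period r of a^x mod N.
# the quantum fourier transform step is replaced by a brute-force period search here,
# so this only shows the structure, not the speedup.
from math import gcd

def find_period_classically(a, N):
    # smallest r > 0 with a^r = 1 (mod N); this is the step a quantum computer speeds up
    x, r = a % N, 1
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def shor_classical(N, a):
    if gcd(a, N) != 1:
        return gcd(a, N), N // gcd(a, N)   # lucky guess: a already shares a factor with N
    r = find_period_classically(a, N)
    if r % 2 == 1 or pow(a, r // 2, N) == N - 1:
        return None                        # bad choice of a, pick another one and retry
    f1 = gcd(pow(a, r // 2, N) - 1, N)
    return f1, N // f1

print(shor_classical(15, 7))  # (3, 5): period of 7 mod 15 is 4, gcd(7**2 - 1, 15) = 3
[/code]

the exponential win comes entirely from doing find_period_classically on a quantum computer via the QFT, which is why breaking RSA is the poster child for "needs lots of clean qubits" rather than something a 50-qubit machine can touch.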