Compare different kernels on simple synthetic data types.
See how low-dimensional features are expanded to richer representations.
What is a kernel?
A kernel is a mathematical tool that implicitly maps data to a higher-dimensional space where it may become linearly separable. It lets us compare points as if we had performed this mapping — without ever computing it explicitly.
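A minimal sketch of this idea, using the classic degree-2 polynomial kernel on 2-D inputs: the explicit feature map and the kernel trick give exactly the same similarity value, but the kernel never builds the higher-dimensional vectors.

```python
import numpy as np

def phi(x):
    # Explicit degree-2 feature map for a 2-D point:
    # (x1^2, x2^2, sqrt(2)*x1*x2)
    x1, x2 = x
    return np.array([x1**2, x2**2, np.sqrt(2) * x1 * x2])

def poly_kernel(x, y):
    # Same similarity computed implicitly: k(x, y) = (x . y)^2
    return np.dot(x, y) ** 2

x = np.array([1.0, 2.0])
y = np.array([3.0, 0.5])

explicit = np.dot(phi(x), phi(y))   # map first, then dot product
implicit = poly_kernel(x, y)        # kernel trick: no mapping needed
print(explicit, implicit)           # both equal (x . y)^2 = 16.0
```

Both routes return 16.0 here, which is why we can work "in" the richer space without paying its cost.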
What is the support vector kernel?
In support vector machines (SVM), only a few points — called support vectors — define the optimal separating boundary. You saw these earlier as the circled points in each class. The kernel computes how similar a new point is to each of these support vectors, and uses this information to determine which class the new point belongs to.
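To see that only a few points matter, here is a small illustration with scikit-learn on two hypothetical Gaussian blobs (a stand-in for the synthetic data above): after fitting, the classifier exposes the support vectors, and they are a small subset of the training set.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Two simple 2-D Gaussian blobs, 20 points per class
X = np.vstack([rng.normal(-2, 1, (20, 2)), rng.normal(2, 1, (20, 2))])
y = np.array([0] * 20 + [1] * 20)

clf = SVC(kernel="rbf").fit(X, y)
# Only a handful of points end up as support vectors;
# they alone define the decision boundary.
print(clf.support_vectors_.shape)
```

The first element of the printed shape is well below 40: the rest of the training points could be discarded without changing the boundary.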
How does prediction work?
During inference, the algorithm measures how similar the new point is to each support vector (via the kernel), forms a weighted sum of those similarities, and classifies the point by the sign of that sum — in other words, by which side of the boundary it falls on.
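The prediction rule can be written out by hand. The sketch below (hypothetical blob data again) recomputes the SVM decision value as a weighted sum of RBF-kernel similarities to the support vectors plus a bias, and checks that the sign matches the fitted classifier.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-2, 1, (20, 2)), rng.normal(2, 1, (20, 2))])
y = np.array([-1] * 20 + [1] * 20)

clf = SVC(kernel="rbf", gamma=0.5).fit(X, y)

def decision(x_new):
    # f(x) = sum_i (alpha_i * y_i) * k(sv_i, x) + b; the sign gives the class.
    # clf.dual_coef_ already stores alpha_i * y_i for each support vector.
    diffs = clf.support_vectors_ - x_new
    k = np.exp(-0.5 * np.sum(diffs**2, axis=1))   # RBF kernel with gamma=0.5
    return np.dot(clf.dual_coef_[0], k) + clf.intercept_[0]

x_new = np.array([2.5, 2.0])
print(np.sign(decision(x_new)))   # agrees with clf.predict([x_new])
```

Note that only the support vectors enter this sum; all other training points have zero weight.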
What’s special about the quantum-inspired kernel?
In quantum computing, input data is projected into a very high (often infinite) dimensional space known as a Hilbert space. This makes it easier to separate complex patterns — if a separation exists. While we don’t have access to quantum hardware here, we can still borrow this idea.
Instead of infinite dimensions, we project our data into a 12-dimensional space using transformations inspired by quantum circuits — like applying sine, cosine, and their combinations to input features. This creates what we call a Hilbert-style expansion.
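One possible 12-dimensional expansion of this kind is sketched below. The exact set of transformations is a design choice; this version uses sines, cosines, and their products and sums, loosely mirroring the rotations a quantum circuit applies to its inputs.

```python
import numpy as np

def hilbert_style_expansion(x):
    """Map a 2-D point to 12 features using sines, cosines, and combinations.

    An illustrative sketch: the specific 12 transformations are an
    assumption, inspired by quantum-circuit rotations on the inputs.
    """
    x1, x2 = x
    return np.array([
        np.sin(x1), np.cos(x1),
        np.sin(x2), np.cos(x2),
        np.sin(x1) * np.sin(x2), np.sin(x1) * np.cos(x2),
        np.cos(x1) * np.sin(x2), np.cos(x1) * np.cos(x2),
        np.sin(x1 + x2), np.cos(x1 + x2),
        np.sin(x1 - x2), np.cos(x1 - x2),
    ])

features = hilbert_style_expansion(np.array([0.3, 1.2]))
print(features.shape)   # (12,)
```

Because every feature is bounded in [-1, 1], the expanded vectors behave somewhat like (unnormalized) quantum state amplitudes.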
Once in this space, we measure similarity using a dot product, which behaves like the fidelity (or overlap) between quantum state vectors. This gives us a kernel matrix — a table that tells us how similar each pair of points is. During inference, a new point is compared to the support vectors using the same similarity measure to assign a class.
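Putting the pieces together: the sketch below (with hypothetical data and a compact sine/cosine expansion standing in for the full 12-D map) builds the kernel matrix as dot products in the expanded space, trains an SVM on it via scikit-learn's precomputed-kernel mode, and classifies a new point by comparing it to the training points with the same similarity measure.

```python
import numpy as np
from sklearn.svm import SVC

def expand(X):
    # Compact sine/cosine expansion (a stand-in for the 12-D map)
    s = X[:, :1] + X[:, 1:]
    return np.hstack([np.sin(X), np.cos(X), np.sin(s), np.cos(s)])

rng = np.random.default_rng(2)
X = rng.uniform(-np.pi, np.pi, (40, 2))
y = (np.sin(X[:, 0]) * np.sin(X[:, 1]) > 0).astype(int)

Phi = expand(X)
K = Phi @ Phi.T          # kernel matrix: dot products in the expanded space

clf = SVC(kernel="precomputed").fit(K, y)

# Inference: compare a new point to the training points the same way
x_new = rng.uniform(-np.pi, np.pi, (1, 2))
k_new = expand(x_new) @ Phi.T     # similarities to all training points
pred = clf.predict(k_new)
print(pred)
```

With `kernel="precomputed"`, the classifier never sees the raw features at fit or predict time — only the similarity table, exactly as described above.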
Visualize allowed vs disallowed regions in φ/ψ space.
Evaluate training in φ/ψ space.