    1 · Random Numbers & Seeds
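As the summary notes, seeding the random number generator makes dataset generation reproducible. A minimal numpy sketch of the idea (illustrative, not the notebook's actual code):

```python
import numpy as np

# Two generators seeded identically produce the exact same "dataset".
rng_a = np.random.default_rng(42)
rng_b = np.random.default_rng(42)

X_a = rng_a.normal(size=(5, 2))  # five 2-D points
X_b = rng_b.normal(size=(5, 2))

identical = np.array_equal(X_a, X_b)  # True: same seed, same data
```

A different seed would yield a different dataset, which is why fixing the seed is the first step of any reproducible experiment.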



    2 · Predictor Statistics
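The summary lists accuracy, precision, recall, F1, and MCC as the metrics covered here. A from-scratch sketch of how they are computed from a binary confusion matrix (illustrative helper, not the notebook's code):

```python
import numpy as np

def binary_metrics(y_true, y_pred):
    """Accuracy, precision, recall, F1, and MCC from the confusion matrix."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    prec = tp / (tp + fp)
    rec = tp / (tp + fn)
    return {
        "accuracy": (tp + tn) / len(y_true),
        "precision": prec,
        "recall": rec,
        "f1": 2 * prec * rec / (prec + rec),
        "mcc": (tp * tn - fp * fn)
               / np.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)),
    }

# 3 true positives, 3 true negatives, 1 false positive, 1 false negative.
scores = binary_metrics([1, 1, 1, 0, 0, 0, 1, 0],
                        [1, 0, 1, 0, 0, 1, 1, 0])
```

MCC is worth singling out: unlike accuracy, it stays informative when the classes are imbalanced, which matters later for the φ/ψ experiments.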


    3 · Support Vectors & Decision Margins


    4 · Kernels on Toy Data

    Compare different kernels on simple synthetic datasets.
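The two kernels most commonly contrasted in such comparisons are linear and RBF; a minimal numpy sketch of both (illustrative definitions, not the notebook's code):

```python
import numpy as np

def linear_kernel(x, y):
    # Plain dot product: similarity in the original feature space.
    return x @ y

def rbf_kernel(x, y, gamma=1.0):
    # Gaussian similarity: 1 at zero distance, decaying with distance.
    return np.exp(-gamma * np.sum((x - y) ** 2))

x1 = np.array([0.0, 0.0])
x2 = np.array([3.0, 0.0])

k_lin = linear_kernel(x1, x2)  # 0.0: the origin is "orthogonal" to everything
k_rbf = rbf_kernel(x1, x2)     # small but positive, shrinking with distance
```

The linear kernel assigns the origin zero similarity to every point, while the RBF kernel grades similarity smoothly by distance, which is why it can carve non-linear boundaries on toy data.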


    5 · Feature Expansion

    See how low-dimensional features are expanded to richer representations.
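A standard way to demonstrate this expansion (an illustrative sketch, not necessarily the mapping used in the notebook) is to lift two concentric circles from 2D into 3D by appending the squared radius as a third coordinate:

```python
import numpy as np

rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 50)
inner = np.column_stack([0.5 * np.cos(theta), 0.5 * np.sin(theta)])
outer = np.column_stack([2.0 * np.cos(theta), 2.0 * np.sin(theta)])

def lift(X):
    # Map (x, y) -> (x, y, x^2 + y^2): circles that no line can separate
    # in 2-D become separable by a horizontal plane in 3-D.
    return np.column_stack([X, np.sum(X ** 2, axis=1)])

z_inner = lift(inner)[:, 2]   # all points sit at height 0.25
z_outer = lift(outer)[:, 2]   # all points sit at height 4.0
```

Any plane at height between 0.25 and 4.0 now separates the classes perfectly, even though no line could in the original 2D space.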


    6 · Visualizing Feature Expansion

    [Figure: side-by-side view of the original 2D space and the expanded 3D space]

    7 · Quantum-Inspired Kernel

    What is a kernel?
    A kernel is a mathematical tool that implicitly maps data to a higher-dimensional space where it becomes linearly separable. It lets us compare points as if we had done this mapping — without actually doing it.
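This "comparing points as if we had done the mapping" can be checked numerically. For a quadratic polynomial kernel, the implicit similarity (x·y)² equals an ordinary dot product after an explicit feature map φ (the map and numbers below are illustrative):

```python
import numpy as np

def phi(x):
    # Explicit quadratic feature map for a 2-D input.
    return np.array([x[0] ** 2, np.sqrt(2) * x[0] * x[1], x[1] ** 2])

def poly_kernel(x, y):
    # The same similarity, computed without ever forming phi.
    return (x @ y) ** 2

x = np.array([1.0, 2.0])
y = np.array([3.0, 0.5])

explicit = phi(x) @ phi(y)     # dot product in the expanded space
implicit = poly_kernel(x, y)   # kernel trick: identical value
```

Both routes give the same number, but the kernel never materializes the higher-dimensional vectors, which is the entire point of the trick.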

    What is the support vector kernel?
    In support vector machines (SVM), only a few points — called support vectors — define the optimal separating boundary. You saw these earlier as the circled points in each class. The kernel computes how similar a new point is to each of these support vectors, and uses this information to determine which class the new point belongs to.

    How does prediction work?
    During inference, the algorithm measures how close the new point is to the support vectors (via the kernel), and classifies it based on which side of the boundary it falls on — essentially, which group of support vectors it's closest to.
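The prediction rule described above can be written as a weighted sum over the support vectors, f(x) = Σᵢ αᵢ yᵢ k(sᵢ, x) + b, with the sign of f deciding the class. A sketch with a hypothetical trained state (the support vectors, dual coefficients, and bias below are made up for illustration):

```python
import numpy as np

def rbf(s, x, gamma=0.5):
    return np.exp(-gamma * np.sum((s - x) ** 2))

# Hypothetical trained SVM state: two support vectors, one per class.
support_vectors = np.array([[0.0, 0.0], [2.0, 2.0]])
labels = np.array([-1.0, 1.0])   # class of each support vector
alphas = np.array([1.0, 1.0])    # dual coefficients from training
b = 0.0                          # bias term

def predict(x):
    # Kernel similarity to each support vector, weighted by alpha * label.
    f = sum(a * y * rbf(s, x)
            for a, y, s in zip(alphas, labels, support_vectors)) + b
    return 1 if f >= 0 else -1
```

A point near the positive support vector overlaps strongly with it and weakly with the negative one, so the weighted sum comes out positive, exactly the "which group of support vectors is it closest to" intuition above.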

    What’s special about the quantum-inspired kernel?
    In quantum computing, input data is projected into a very high (often infinite) dimensional space known as a Hilbert space. This makes it easier to separate complex patterns — if a separation exists. While we don’t have access to quantum hardware here, we can still borrow this idea.

    Instead of infinite dimensions, we project our data into a 12-dimensional space using transformations inspired by quantum circuits — like applying sine, cosine, and their combinations to input features. This creates what we call a Hilbert-style expansion.

    Once in this space, we measure similarity using a dot product, which behaves like the fidelity (or overlap) between quantum state vectors. This gives us a kernel matrix — a table that tells us how similar each pair of points is. During inference, a new point is compared to the support vectors using the same similarity measure to assign a class.
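The pipeline described here can be sketched end to end. The exact sine/cosine transformations of the 12-dimensional expansion are not specified in the text, so the feature map below is an illustrative choice; the key properties, unit-norm vectors whose dot products behave like fidelities, do follow from the construction:

```python
import numpy as np

def hilbert_expand(x):
    # Hypothetical "Hilbert-style" expansion of a 2-D point into 12
    # sine/cosine features; the notebook's exact transformations are
    # not given, so these combinations are illustrative.
    a, b = x
    feats = np.array([
        np.sin(a), np.cos(a), np.sin(b), np.cos(b),
        np.sin(2 * a), np.cos(2 * a), np.sin(2 * b), np.cos(2 * b),
        np.sin(a) * np.cos(b), np.cos(a) * np.sin(b),
        np.sin(a) * np.sin(b), np.cos(a) * np.cos(b),
    ])
    # Normalize to unit length, like a quantum state vector.
    return feats / np.linalg.norm(feats)

def fidelity_kernel(X):
    # Gram matrix of pairwise dot products ("overlaps") in the 12-D space.
    Phi = np.array([hilbert_expand(x) for x in X])
    return Phi @ Phi.T

X = np.array([[0.1, 0.2], [0.1, 0.2], [2.5, -1.0]])
K = fidelity_kernel(X)
```

Because the expanded vectors are unit-length, each point has overlap 1 with itself, identical points have overlap 1 with each other, and all entries are bounded by 1 in magnitude, the same properties a fidelity between quantum states would have.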


    8 · Ramachandran Map

    Visualize allowed vs disallowed regions in φ/ψ space.
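Assigning structural classes from φ/ψ angles amounts to testing which region of the map a residue falls in. A coarse sketch with hypothetical rectangular cutoffs (real DSSP-derived boundaries are more nuanced than these illustrative ranges):

```python
def rama_class(phi, psi):
    # Hypothetical coarse Ramachandran regions, angles in degrees.
    # These rectangles only approximate the true allowed regions.
    if -160 <= phi <= -50 and -70 <= psi <= -20:
        return "alpha-helix"
    if -180 <= phi <= -50 and 90 <= psi <= 180:
        return "beta-sheet"
    return "other"
```

Everything outside the two rectangles, including most of the sterically disallowed area, falls into the catch-all class.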


    9 · Quantum-Inspired SVM on Balanced φ/ψ

    Evaluate how dataset size, class balance, and stratification affect SVM training in φ/ψ space.
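Stratification, one of the factors the summary says this section examines, means splitting the data so each class keeps the same proportion in train and test. A numpy-only sketch (illustrative helper, not the notebook's code):

```python
import numpy as np

def stratified_split(y, test_frac=0.25, seed=0):
    # Sample a test set per class so the class ratio is preserved.
    rng = np.random.default_rng(seed)
    train, test = [], []
    for cls in np.unique(y):
        idx = np.where(y == cls)[0]
        rng.shuffle(idx)
        n_test = int(len(idx) * test_frac)
        test.extend(idx[:n_test])
        train.extend(idx[n_test:])
    return np.array(train), np.array(test)

# Imbalanced labels, e.g. 80 helix residues vs 20 sheet residues.
y = np.array([0] * 80 + [1] * 20)
train, test = stratified_split(y)
```

Without stratification, a random split of such imbalanced data can leave the minority class nearly absent from the test set, which silently inflates the metrics from section 2.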


    10 · Summary

    • 1 · Reproducible Randomness with Seeds: Demonstrated how setting random seeds ensures consistent and reproducible dataset generation.
    • 2 · Predictor Statistics: Explored accuracy, precision, recall, F1 score, and MCC as essential metrics for evaluating classifiers.
    • 3 · Support Vectors & Decision Margins: Illustrated how SVMs find optimal separating hyperplanes and the role of support vectors in defining decision boundaries.
    • 4 · Kernels on Toy Data: Compared linear and non-linear kernels across separable and non-separable datasets using interactive decision boundary plots.
    • 5 · Feature Expansion: Visualized how data that is inseparable in 2D becomes separable in 3D through kernel-induced feature expansion.
    • 6 · Visualizing Feature Expansion: Used a visual example to show how separation is achieved when mapping from low to high dimensions.
    • 7 · Quantum-Inspired Kernel: Described the theoretical foundations and formulation of a quantum-inspired kernel based on inner product fidelity (text only).
    • 8 · Ramachandran Map: Mapped DSSP-derived φ/ψ angles to structural classes and prepared real-world data for classification tasks.
    • 9 · Quantum-Inspired SVM on Balanced φ/ψ: Showcased the effect of dataset size, class balance, and stratification on SVM performance using real Ramachandran data.