Introduction
Welcome to Chapter 16! So far, we’ve explored the fascinating world of vector search, diving deep into USearch and its powerful integration with ScyllaDB. We’ve learned how to store, index, and query high-dimensional vectors, enabling intelligent applications like recommendation engines and semantic search. But what happens when things don’t go as planned? How do you ensure your vector search system is performing optimally, and what do you do when it’s not?
This chapter is all about becoming a detective for your vector search infrastructure. We’ll equip you with the knowledge and tools to effectively monitor the health, performance, and reliability of your USearch-powered ScyllaDB vector search. We’ll cover key metrics to track, introduce industry-standard observability tools like Prometheus and Grafana, and guide you through common debugging scenarios. By the end of this chapter, you’ll be able to proactively identify issues, troubleshoot performance bottlenecks, and ensure your vector search remains robust and responsive in production.
To get the most out of this chapter, a basic understanding of USearch and ScyllaDB concepts from previous chapters is beneficial. Familiarity with Docker will also be helpful for setting up our monitoring tools.
Core Concepts: Why and What to Monitor
Before we dive into tools, let’s understand why monitoring is so critical for vector search and what specific aspects we need to keep an eye on.
Why Monitor Vector Search?
Imagine your vector search system is the brain of your AI application. If the brain isn’t functioning well, the whole application suffers! Monitoring provides the vital signs of this “brain,” allowing you to:
- Ensure Performance: Vector search is often latency-sensitive. Monitoring helps you track query times, throughput, and indexing speeds to ensure your users get fast, accurate results.
- Maintain Reliability: Detect errors, crashes, or resource exhaustion before they lead to outages or data loss. Proactive alerts can save your day (and your sleep!).
- Optimize Resource Utilization: ScyllaDB and USearch can be resource-intensive. Monitoring helps you understand CPU, memory, and disk usage, allowing you to scale efficiently and avoid unnecessary costs.
- Validate Data Quality: While not directly a runtime metric, understanding recall and precision (often measured offline) is crucial. Monitoring related metrics like vector indexing rates can hint at data pipeline issues impacting quality.
- Capacity Planning: By observing trends in usage and performance, you can make informed decisions about scaling your infrastructure before you hit limits.
Key Metrics to Monitor
When monitoring a vector search system, we typically look at metrics across three layers: the database (ScyllaDB), the vector search engine (USearch, often integrated into ScyllaDB), and the application layer.
ScyllaDB Vector Search Specific Metrics
ScyllaDB’s integrated vector search provides excellent insights. Here are some critical metrics:
- Vector Indexing Rate: How many vectors are being inserted or updated in the vector column per second? A sudden drop might indicate an issue with your data ingestion pipeline.
- Vector Search Latency: This is paramount! We typically look at percentiles like P99 (99th percentile) and P95 (95th percentile) to understand worst-case user experience, as well as the average. High latency means slow search results.
- Vector Search Throughput: How many vector similarity queries are processed per second? This indicates the system’s capacity.
- Vector Index Size: The number of vectors stored and the memory footprint of the index itself. This helps with capacity planning and understanding memory pressure.
- ScyllaDB Node Health: Standard database metrics are still crucial:
- CPU Utilization: Is your ScyllaDB node becoming a bottleneck?
- Memory Usage: Is ScyllaDB using too much memory, potentially leading to swapping or OOM (Out Of Memory) errors?
- Disk I/O: For persistence and compaction.
- Network I/O: For inter-node communication and client requests.
- Compaction Metrics: ScyllaDB’s internal process for optimizing data storage. High compaction can indicate write amplification or disk pressure.
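Averages can hide exactly the tail behavior that P95/P99 expose. A minimal sketch with made-up latency samples, using the nearest-rank percentile method (Prometheus instead estimates percentiles from histogram buckets, but the intuition is the same):

```python
# Sketch: why P95/P99 matter — an average masks a slow outlier.
import math

def percentile(samples, pct):
    """Nearest-rank percentile of a list of latency samples."""
    ordered = sorted(samples)
    rank = math.ceil(pct / 100 * len(ordered))  # 1-indexed nearest rank
    return ordered[rank - 1]

# Illustrative query latencies in milliseconds — one slow outlier:
latencies_ms = [12, 14, 15, 13, 11, 250, 16, 12, 14, 13]

avg = sum(latencies_ms) / len(latencies_ms)
print(f"avg: {avg:.1f} ms")                       # 37.0 ms — masks the outlier
print(f"P95: {percentile(latencies_ms, 95)} ms")  # 250 ms — exposes it
```

One request in ten was 20x slower than the rest, yet the average barely hints at it: this is why latency SLOs are usually stated in percentiles.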
USearch Metrics (Conceptual, if USearch is external or for deeper dives)
While USearch is integrated into ScyllaDB, if you were using it as a standalone component or wanted to understand its internal workings more deeply, you might track:
- Index Build Time: How long does it take to build or rebuild a USearch index?
- Search Query Latency (internal): The time USearch itself takes to find similar vectors.
- Index Memory Footprint: The exact memory consumed by the USearch index structure.
- Recall/Precision: These are quality metrics, often measured offline through evaluation datasets, but they dictate the effectiveness of your vector search. If they drop, it’s a critical issue.
Application-Level Metrics
Finally, your application that uses vector search also needs monitoring:
- End-to-End Query Latency: From when a user clicks “search” to when results appear. This includes embedding generation, network latency, and ScyllaDB search time.
- Embedding Generation Latency: How long does it take your embedding model (e.g., a transformer model) to convert input text/images into vectors?
- Error Rates: How often do vector search queries or embedding generation fail?
- Cache Hit Ratios: If you’re caching vector search results, how effective is your cache?
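Collecting these application-level numbers means instrumenting your search path. The sketch below shows the shape of that instrumentation with a stdlib stand-in for a metrics registry (in production you would use `prometheus_client`'s `Counter` and `Histogram`, as in the mini-challenge later in this chapter); the metric names and placeholder search body are our own illustrations:

```python
# Sketch: instrumenting end-to-end search latency and error counts.
# Plain-dict registry as a stand-in for prometheus_client metrics.
import time
from collections import defaultdict

metrics = defaultdict(float)   # counters
latency_samples = []           # would be a Histogram in Prometheus

def search(query_text):
    start = time.perf_counter()
    try:
        # ... generate embedding, run the ANN query against ScyllaDB ...
        return ["result-1", "result-2"]  # placeholder results
    except Exception:
        metrics["search_errors_total"] += 1
        raise
    finally:
        # Recorded on success AND failure — error latencies matter too
        latency_samples.append(time.perf_counter() - start)
        metrics["search_requests_total"] += 1

search("red fruit")
print(metrics["search_requests_total"])  # 1.0
```

The `try/finally` pattern matters: timing only the happy path systematically under-reports latency when errors are slow (e.g., timeouts).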
Observability Stack: Tools of the Trade
To collect, store, visualize, and alert on these metrics, we use an “observability stack.” The most common and powerful combination for metrics is Prometheus and Grafana.
Figure 16.1: A typical Observability Stack for Vector Search
- Prometheus: A powerful open-source monitoring system that collects metrics from configured targets at given intervals, evaluates rule expressions, displays results, and can trigger alerts. It’s a time-series database optimized for storing metrics.
- Grafana: An open-source platform for monitoring and observability. It allows you to create beautiful, customizable dashboards from various data sources, including Prometheus. It’s fantastic for visualizing trends and setting up alerts.
- Logging (e.g., Loki, ELK Stack): While not our primary focus for metrics, logs are crucial for debugging. They provide detailed event streams from your application and database, helping you pinpoint the exact cause of an issue that metrics might only hint at.
- Tracing (e.g., OpenTelemetry): For complex distributed systems, tracing helps you follow a single request as it propagates through multiple services, identifying latency hotspots across your entire application stack.
In this chapter, we’ll focus on Prometheus and Grafana for metrics, as they are fundamental for vector search monitoring with ScyllaDB.
Step-by-Step Implementation: Setting up Basic Monitoring
ScyllaDB has excellent built-in integration with Prometheus. This makes setting up monitoring relatively straightforward. We’ll use Docker for simplicity to get all components running.
Step 1: Running ScyllaDB with Monitoring Enabled
ScyllaDB exposes its metrics via a Prometheus-compatible endpoint. When running ScyllaDB in a production environment, you’d typically manage this with your orchestration system (Kubernetes, etc.). For our learning purposes, Docker is perfect.
First, let’s start a ScyllaDB instance. We’ll map the Prometheus port (9180) so our Prometheus server can access it.
```bash
# Start a ScyllaDB container
docker run --name scylla-node1 -d \
  -p 9042:9042 \
  -p 9180:9180 \
  scylladb/scylla:6.0.0
```
Code 16.1: Starting ScyllaDB with the Prometheus port exposed
Explanation:
- `docker run --name scylla-node1 -d`: Starts a Docker container named `scylla-node1` in detached mode.
- `-p 9042:9042`: Maps the standard Cassandra/ScyllaDB CQL port.
- `-p 9180:9180`: Maps ScyllaDB's Prometheus metrics endpoint port. This is crucial for Prometheus to scrape metrics.
- `scylladb/scylla:6.0.0`: We're using ScyllaDB version 6.0.0, a recent stable release as of 2026-02-17. Always check the official ScyllaDB Docker Hub for the latest stable tag.
Give it a minute or two to start up. You can verify ScyllaDB’s metrics endpoint by navigating to http://localhost:9180/metrics in your browser. You should see a long list of metrics in the Prometheus text exposition format.
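You can also verify the endpoint programmatically. The sketch below parses the Prometheus text format; the sample payload is illustrative (a real check would fetch the text with `urllib.request.urlopen("http://localhost:9180/metrics")` and the metric names/values would differ):

```python
# Sketch: parsing the Prometheus text exposition format returned by
# http://localhost:9180/metrics. Sample payload is illustrative only.
sample = """\
# HELP scylla_reactor_utilization CPU utilization
# TYPE scylla_reactor_utilization gauge
scylla_reactor_utilization{shard="0"} 12.5
scylla_reactor_utilization{shard="1"} 9.75
"""

def parse_metrics(text):
    """Return {metric_with_labels: value} for non-comment lines."""
    out = {}
    for line in text.splitlines():
        if not line or line.startswith("#"):
            continue  # skip HELP/TYPE comments and blank lines
        name, value = line.rsplit(" ", 1)
        out[name] = float(value)
    return out

parsed = parse_metrics(sample)
print(parsed['scylla_reactor_utilization{shard="0"}'])  # 12.5
```

A quick script like this is handy in CI or health checks, before any Prometheus server is involved.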
Step 2: Install and Configure Prometheus
Now, let’s set up Prometheus to scrape metrics from our ScyllaDB instance.
Create a Prometheus configuration file: We need a `prometheus.yml` file to tell Prometheus what to monitor. Create a new directory, say `monitoring_stack`, and inside it create `prometheus.yml`:

```yaml
# monitoring_stack/prometheus.yml
global:
  scrape_interval: 15s  # How frequently Prometheus scrapes targets

scrape_configs:
  - job_name: 'scylladb'
    # Point to the ScyllaDB container's Prometheus endpoint
    static_configs:
      - targets: ['scylla-node1:9180']
        # If running outside Docker, use 'localhost:9180'
        # If running with docker compose, use the service name:port
```
Code 16.2: Basic Prometheus configuration for ScyllaDB
Explanation:
- `global.scrape_interval`: Prometheus will attempt to pull metrics from configured targets every 15 seconds.
- `scrape_configs`: Defines the targets Prometheus should scrape.
- `job_name: 'scylladb'`: A label for this set of targets.
- `static_configs.targets: ['scylla-node1:9180']`: Tells Prometheus to look for a host named `scylla-node1` on port `9180`. This works because Docker containers can resolve each other by name within the same Docker network. If you were running Prometheus directly on your host and ScyllaDB in Docker, you'd use `localhost:9180`.
Run Prometheus: Now, let’s run Prometheus as a Docker container, mounting our configuration file.
```bash
# From the 'monitoring_stack' directory
docker run --name prometheus -d \
  -p 9090:9090 \
  -v "$(pwd)/prometheus.yml:/etc/prometheus/prometheus.yml" \
  --link scylla-node1 \
  prom/prometheus:v2.49.1
```
Code 16.3: Starting the Prometheus container
Explanation:
- `-p 9090:9090`: Maps Prometheus's web UI port.
- `-v "$(pwd)/prometheus.yml:/etc/prometheus/prometheus.yml"`: Mounts our `prometheus.yml` file into the container at the expected path; `$(pwd)` ensures the current directory is used.
- `--link scylla-node1`: Creates a network alias for `scylla-node1` in the Prometheus container, allowing Prometheus to resolve `scylla-node1` by name. `--link` is a legacy Docker feature; `docker compose` is preferred for more complex setups.
- `prom/prometheus:v2.49.1`: Prometheus version 2.49.1, a stable release as of 2026-02-17. Check Prometheus Docker Hub for the latest.
Verify Prometheus is running and scraping by opening `http://localhost:9090` in your browser. Go to "Status" -> "Targets". You should see `scylladb` listed with an "UP" status.
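Target health can also be checked without the UI, via Prometheus's HTTP API (`GET /api/v1/targets`). The sketch below works on a trimmed, illustrative example of that endpoint's JSON shape; in practice you would fetch the body from `http://localhost:9090/api/v1/targets`:

```python
# Sketch: finding unhealthy scrape targets from the /api/v1/targets response.
# response_body is a trimmed illustrative example of the real JSON shape.
import json

response_body = json.dumps({
    "status": "success",
    "data": {"activeTargets": [
        {"labels": {"job": "scylladb"}, "health": "up"},
    ]},
})

def unhealthy_targets(body):
    """Return job names of every active target whose health is not 'up'."""
    targets = json.loads(body)["data"]["activeTargets"]
    return [t["labels"]["job"] for t in targets if t["health"] != "up"]

print(unhealthy_targets(response_body))  # [] — everything is up
```

Wiring this into a cron job or CI step gives you an early warning if a scrape target silently drops out.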
Step 3: Install and Configure Grafana
Grafana will provide the beautiful dashboards for our metrics.
Run Grafana:

```bash
# From the 'monitoring_stack' directory
docker run --name grafana -d \
  -p 3000:3000 \
  --link prometheus \
  grafana/grafana:10.3.3
```
Code 16.4: Starting the Grafana container
Explanation:
- `-p 3000:3000`: Maps Grafana's web UI port.
- `--link prometheus`: Links Grafana to the Prometheus container, allowing it to resolve `prometheus` by name.
- `grafana/grafana:10.3.3`: Grafana version 10.3.3, a stable release as of 2026-02-17. Check Grafana Docker Hub for the latest.
Access Grafana at `http://localhost:3000`. The default login is `admin`/`admin`; you'll be prompted to change the password.

Add Prometheus as a data source:
- In Grafana, navigate to “Connections” (or the gear icon in older versions) -> “Data sources”.
- Click “Add data source” and select “Prometheus”.
- For "Name", enter `Prometheus`.
- For "URL", enter `http://prometheus:9090` (this works because of the `--link prometheus` we used).
- Click "Save & test". You should see "Data source is working."
Import ScyllaDB’s Official Grafana Dashboards: ScyllaDB provides excellent, pre-built Grafana dashboards. This is a huge time-saver!
- In Grafana, go to “Dashboards” -> “Import”.
- You can import by ID or URL. ScyllaDB often publishes their dashboards on Grafana Labs. Search for “ScyllaDB”.
- A common dashboard ID is `12195` (ScyllaDB Overview). Enter this ID.
- Select your "Prometheus" data source when prompted.
- Click “Import”.
Congratulations! You now have a comprehensive ScyllaDB monitoring dashboard. Explore it to see metrics like CPU, memory, I/O, and even vector search specific metrics once you start inserting data.
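The data-source step above can also be automated through Grafana's HTTP API (`POST /api/datasources`), which is handy once you provision monitoring more than once. This sketch only builds the request; the URL and `admin`/`admin` credentials match this chapter's defaults, and you would uncomment the last line against a live Grafana:

```python
# Sketch: adding the Prometheus data source via Grafana's HTTP API
# instead of the UI. Builds the request only; does not send it.
import base64
import json
import urllib.request

payload = {
    "name": "Prometheus",
    "type": "prometheus",
    "url": "http://prometheus:9090",  # resolvable thanks to --link prometheus
    "access": "proxy",
}
auth = base64.b64encode(b"admin:admin").decode()  # default credentials

req = urllib.request.Request(
    "http://localhost:3000/api/datasources",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json",
             "Authorization": f"Basic {auth}"},
    method="POST",
)
print(req.get_method())  # POST
# urllib.request.urlopen(req)  # uncomment with Grafana running
```

Grafana also supports file-based provisioning (`provisioning/datasources/*.yml`), which is the usual choice for Docker and Kubernetes deployments.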
Step 4: Creating a Custom Vector Search Dashboard (Example)
Let’s create a very simple custom panel to visualize a key vector search metric. We’ll need some data first.
Insert some vector data into ScyllaDB: Connect to your ScyllaDB instance using `cqlsh` (you might need to install it, or run it in a separate Docker container linked to `scylla-node1`).

```bash
# Connect to ScyllaDB (if cqlsh is installed locally)
cqlsh localhost 9042

# Or run cqlsh in a temporary container:
docker run -it --link scylla-node1 scylladb/scylla:6.0.0 cqlsh scylla-node1 9042
```

Inside `cqlsh`, execute the following:

```sql
CREATE KEYSPACE IF NOT EXISTS vector_demo
  WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1};

USE vector_demo;

CREATE TABLE IF NOT EXISTS items (
    id UUID PRIMARY KEY,
    name TEXT,
    description TEXT,
    embedding VECTOR<FLOAT, 3>
);

-- Create a vector index
CREATE CUSTOM INDEX ON items (embedding)
  USING 'org.apache.cassandra.index.sai.StorageAttachedIndex'
  WITH OPTIONS = {
    'target_column_size_bytes': '128MiB',
    'ann': '{ "metric": "euclidean", "quantization": {"flat": {}} }'
  };

INSERT INTO items (id, name, description, embedding)
  VALUES (uuid(), 'apple', 'A red fruit', [0.1, 0.2, 0.3]);
INSERT INTO items (id, name, description, embedding)
  VALUES (uuid(), 'banana', 'A yellow fruit', [0.2, 0.3, 0.4]);
INSERT INTO items (id, name, description, embedding)
  VALUES (uuid(), 'orange', 'An orange fruit', [0.3, 0.4, 0.5]);
INSERT INTO items (id, name, description, embedding)
  VALUES (uuid(), 'kiwi', 'A green fruit', [0.8, 0.7, 0.9]);
```
Code 16.5: Creating a table with a vector column and inserting data
Now, let’s perform some vector searches to generate metrics:
```sql
SELECT name, description, embedding FROM items
  ORDER BY embedding ANN OF [0.15, 0.25, 0.35] LIMIT 2;

SELECT name, description, embedding FROM items
  ORDER BY embedding ANN OF [0.7, 0.8, 0.9] LIMIT 1;
```
Code 16.6: Performing vector search queries
Exit `cqlsh` by typing `exit;`.

Create a Grafana panel:
Go back to your Grafana browser tab.
Navigate to “Dashboards” and click “New Dashboard”.
Click “Add visualization”.
Select your “Prometheus” data source.
In the “Query” tab, enter the following PromQL query:
```promql
rate(scylla_vector_search_queries_total{job="scylladb"}[1m])
```
Code 16.7: PromQL query for vector search queries per second
Explanation:
- `scylla_vector_search_queries_total`: A counter metric provided by ScyllaDB, incrementing each time a vector search query is performed.
- `{job="scylladb"}`: Filters the metric to only include those from our `scylladb` job in Prometheus.
- `rate(...[1m])`: This Prometheus function calculates the per-second rate of increase of the counter over the last minute, effectively giving us "queries per second."
Set the “Visualization” type to “Graph”.
Give your panel a title like “Vector Search Queries per Second”.
Click "Apply" to see your new panel. You should see spikes corresponding to your `ANN OF` queries.

Save your dashboard.
This simple example demonstrates how you can start building custom dashboards tailored to your specific vector search needs. You can explore other ScyllaDB vector search metrics like scylla_vector_search_latency_microseconds_bucket (for latency percentiles) to create more advanced visualizations.
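Under the hood, `rate()` is simply the per-second slope of the counter over the query window. A simplified sketch with made-up samples (real Prometheus additionally extrapolates to the window boundaries and handles counter resets):

```python
# Sketch: what rate(counter[1m]) computes — the per-second slope of a
# counter over a window. Samples are (timestamp_seconds, value) pairs.

def simple_rate(samples):
    """Per-second increase between the first and last sample in the window."""
    (t0, v0), (t1, v1) = samples[0], samples[-1]
    return (v1 - v0) / (t1 - t0)

# A counter scraped every 15 s over a 1-minute window:
window = [(0, 100), (15, 130), (30, 160), (45, 190), (60, 220)]
print(simple_rate(window))  # 2.0 queries/sec
```

This is why `rate()` needs at least two samples in its window: with a 15 s scrape interval, a `[1m]` window is about the practical minimum.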
Mini-Challenge: Extend Monitoring with a Custom Application Metric
Now it’s your turn to get hands-on!
Challenge:
- Create a simple Python Flask application that exposes a custom Prometheus metric at `/metrics`. This metric should be a `usearch_index_builds_total` counter that increments periodically, simulating USearch index builds.
- Modify your `prometheus.yml` to scrape this new application endpoint.
- Add a new panel to your Grafana dashboard to visualize the `rate()` of your `usearch_index_builds_total` metric.
Hint:
- For the Python app, you'll need the `prometheus_client` library (`pip install prometheus_client Flask`).
- `prometheus_client` provides `Counter`, `Gauge`, `Histogram`, and `Summary` types. A `Counter` is suitable here.
- Remember to link your new Python app container to Prometheus, or use `localhost` if running it directly on your host.
What to Observe/Learn:
- How to expose custom application metrics for Prometheus.
- How to extend your Prometheus configuration to scrape multiple jobs/targets.
- How to integrate custom metrics into Grafana dashboards.
```python
# monitoring_stack/app/app.py
from flask import Flask, Response
from prometheus_client import Counter, generate_latest
import time
import random
import threading

app = Flask(__name__)

# Create a Prometheus Counter metric.
# This counter will track the number of simulated index builds.
usearch_index_builds_total = Counter(
    'usearch_index_builds_total',
    'Total number of USearch index builds simulated'
)

# Function to simulate index builds and increment the counter
def simulate_indexing():
    builds = 0
    while True:
        # Simulate an index build every 5-15 seconds
        time.sleep(random.uniform(5, 15))
        usearch_index_builds_total.inc()  # Increment the counter
        builds += 1  # Local count for logging (avoids prometheus_client internals)
        print(f"Simulated USearch index build. Total: {builds}")

# Start the simulation in a background thread
indexing_thread = threading.Thread(target=simulate_indexing, daemon=True)
indexing_thread.start()

@app.route('/metrics')
def metrics():
    return Response(generate_latest(), mimetype='text/plain')

@app.route('/')
def hello():
    return "Hello from the Vector Search App! Metrics available at /metrics"

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)
```
Code 16.8: Python Flask app exposing a custom Prometheus metric
Solution Steps:
1. Save the Python code: Save the code above as `app/app.py` inside your `monitoring_stack` directory.

2. Create a `requirements.txt`: In `monitoring_stack/app/`, create `requirements.txt`:

```text
Flask
prometheus_client
```

3. Create a `Dockerfile` for the app: In `monitoring_stack/app/`, create `Dockerfile`:

```dockerfile
FROM python:3.10-slim-buster
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY app.py .
EXPOSE 5000
CMD ["python", "app.py"]
```

4. Build and run the app container:

```bash
# From the 'monitoring_stack' directory
docker build -t vector-app ./app
docker run --name vector-app -d \
  -p 5000:5000 \
  vector-app
```

Verify by visiting `http://localhost:5000/metrics`.

5. Update `prometheus.yml`: Add a new job for your app.

```yaml
# monitoring_stack/prometheus.yml (updated)
global:
  scrape_interval: 15s

scrape_configs:
  - job_name: 'scylladb'
    static_configs:
      - targets: ['scylla-node1:9180']

  - job_name: 'vector_app'  # New job for our custom app
    static_configs:
      - targets: ['vector-app:5000']  # Point to the app container
```
Code 16.9: Updated Prometheus configuration

6. Restart Prometheus:

```bash
docker stop prometheus
docker rm prometheus
docker run --name prometheus -d \
  -p 9090:9090 \
  -v "$(pwd)/prometheus.yml:/etc/prometheus/prometheus.yml" \
  --link scylla-node1 \
  --link vector-app \
  prom/prometheus:v2.49.1
```

Note: we added `--link vector-app` to allow Prometheus to resolve the app container by name.

7. Add a Grafana panel:
- In Grafana, go to your dashboard, click “Add visualization”.
- Use the query: `rate(usearch_index_builds_total{job="vector_app"}[1m])`
- Set the visualization to "Graph" and title it "Simulated USearch Index Builds/Sec".
- Observe the fluctuating rate as your app simulates index builds.
This challenge helps you understand the full cycle of creating custom metrics, exposing them, scraping them with Prometheus, and visualizing them in Grafana.
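The challenge uses a Counter, which tracks *how many* builds happened. If you also wanted to track *how long* each build takes (and later compute percentiles), the right Prometheus type is a Histogram. The sketch below shows the cumulative-bucket mechanics that `prometheus_client.Histogram` implements for you; the bucket bounds here are arbitrary illustrations:

```python
# Sketch: the cumulative-bucket bookkeeping behind a Prometheus Histogram.
# In production you'd use prometheus_client.Histogram; this shows the idea.
import bisect

buckets = [0.1, 0.5, 1.0, 5.0, float("inf")]  # upper bounds ("le" labels)
counts = [0] * len(buckets)

def observe(duration_seconds):
    # Cumulative buckets: increment every bucket whose bound >= the value
    for i in range(bisect.bisect_left(buckets, duration_seconds), len(buckets)):
        counts[i] += 1

for d in [0.05, 0.3, 0.7, 2.0]:  # four simulated index-build durations
    observe(d)
print(counts)  # [1, 2, 3, 4, 4] — cumulative, like the *_bucket{le=...} series
```

These cumulative counts are exactly what PromQL's `histogram_quantile()` consumes to estimate P95/P99 latencies.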
Common Pitfalls & Troubleshooting
Even with robust monitoring, issues can arise. Knowing common pitfalls and how to approach debugging is crucial.
1. High Latency or Low Recall in Vector Search
This is perhaps the most critical issue. Users expect fast and accurate results.
Pitfall 1: Suboptimal ScyllaDB Vector Index Parameters.
- Description: The `ann` options in `CREATE CUSTOM INDEX` (e.g., `metric`, `quantization`, `connectivity`) directly impact performance and quality. Incorrect choices can lead to slow queries or poor search results (low recall).
- Debugging:
  - Monitor `scylla_vector_search_latency_microseconds_bucket` and `scylla_vector_search_queries_total`: Look for consistently high latency.
  - Review your `CREATE CUSTOM INDEX` statement: Are the `metric` and `quantization` types appropriate for your embedding space? For example, `euclidean` for distance, `cosine` for similarity. `quantization: {"flat": {}}` is simple but can be slow for very large datasets; consider `quantization: {"product_quantization": {"subvectors": 8}}` for better performance at the cost of some accuracy.
  - Adjust `num_neighbors` in `ANN OF` queries: A higher `num_neighbors` increases accuracy (recall) but also latency. Experiment to find the right balance.
  - Offline evaluation: Regularly evaluate your vector search quality (recall, precision) using a ground-truth dataset. If recall drops, it might indicate an issue with your index or embeddings.
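Offline recall evaluation is simple to implement once you have ground truth. A minimal sketch, with toy item names standing in for IDs; in practice the ground-truth neighbors come from an exact (brute-force) search over the same dataset:

```python
# Sketch: offline recall@k — what fraction of the true top-k neighbors
# the ANN index actually returned. Toy data for illustration.

def recall_at_k(retrieved, relevant, k):
    """Overlap between ANN results and exact top-k, as a fraction of k."""
    return len(set(retrieved[:k]) & set(relevant[:k])) / k

ground_truth = ["apple", "banana", "orange"]  # exact nearest neighbors
ann_result = ["apple", "orange", "kiwi"]      # what the index returned

print(round(recall_at_k(ann_result, ground_truth, 3), 2))  # 0.67
```

Averaging this over a few hundred held-out queries, and alerting when the average dips, catches index-quality regressions that latency metrics alone will never show.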
Pitfall 2: Embedding Quality Degradation.
- Description: The quality of your vector embeddings directly determines the relevance of search results. If your embedding model starts producing poor embeddings, your search will suffer.
- Debugging:
- Monitor Embedding Service Latency/Errors: If your service generating embeddings (e.g., a Python microservice using Hugging Face transformers) is slow or erroring, it impacts the entire chain.
- Sample Embeddings: Periodically sample new embeddings and visualize them (e.g., with t-SNE or UMAP) to check for unexpected clusters or drifts.
- Application Logs: Look for errors in your embedding generation service.
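Alongside visual checks like t-SNE/UMAP, a cheap programmatic drift check is to compare the centroid of freshly generated embeddings against a stored baseline. A minimal sketch with toy 3-d vectors; the 0.9 alert threshold is an arbitrary illustration you would tune for your model:

```python
# Sketch: embedding-drift check via cosine similarity of batch centroids.
# Toy 3-d vectors; real embeddings would be much higher-dimensional.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def centroid(vectors):
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

baseline = centroid([[0.1, 0.2, 0.3], [0.2, 0.3, 0.4]])   # stored snapshot
fresh = centroid([[0.12, 0.21, 0.33], [0.19, 0.28, 0.41]])  # recent batch

similarity = cosine(baseline, fresh)
if similarity < 0.9:  # arbitrary threshold — tune per model
    print("ALERT: embedding distribution may have drifted")
print(round(similarity, 3))  # close to 1.0 → no drift here
```

Exposing `similarity` as a Gauge through `prometheus_client` would let Grafana alert on drift automatically.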
2. Resource Exhaustion (CPU, Memory, Disk)
ScyllaDB and USearch can be resource-hungry, especially with large datasets.
Pitfall 1: Insufficient ScyllaDB Node Resources.
- Description: Too many vectors, high query load, or inefficient indexing can overwhelm ScyllaDB nodes, leading to high CPU usage, OOM errors, or slow disk I/O.
- Debugging:
  - Grafana dashboards: Use the official ScyllaDB dashboards (or your custom ones) to monitor:
    - `scylla_cpu_usage_total_percent`: Consistently high CPU indicates a bottleneck.
    - `scylla_memory_total_bytes`, `scylla_memory_used_bytes`: High memory usage, especially if it leads to swapping, is problematic.
    - `scylla_io_queue_length`, `scylla_disk_read_bytes_total`: High I/O metrics can point to disk bottlenecks.
  - ScyllaDB logs (`/var/log/scylla/system.log`): Look for warnings or errors related to memory pressure, compaction stalls, or I/O issues.
  - Scaling: If resources are consistently maxed out, consider adding more ScyllaDB nodes, upgrading existing nodes (more CPU/memory), or optimizing your data model/queries.
Pitfall 2: USearch Index Memory Footprint.
- Description: USearch indexes reside in memory for fast access. If your dataset grows too large, the index might consume excessive memory, leading to OOM errors or impacting other ScyllaDB operations.
- Debugging:
  - Monitor `scylla_vector_index_memory_used_bytes` (conceptual; check ScyllaDB's actual metric names): Keep an eye on the memory consumed by the vector index.
  - Review your `quantization` strategy: `flat` quantization uses more memory per vector than `product_quantization`. For very large datasets, Product Quantization (PQ) can drastically reduce the memory footprint at the cost of some accuracy.
  - Data partitioning: For truly massive datasets, you might need to partition your vector data across multiple ScyllaDB tables or even clusters, each with its own vector index.
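The flat-vs-PQ trade-off is easy to reason about with back-of-envelope arithmetic. A sketch under illustrative assumptions (10M vectors, 768 dimensions, 8 PQ subvectors at one byte each; HNSW graph/link overhead is ignored for simplicity):

```python
# Sketch: back-of-envelope index memory estimate, flat float32 vs PQ codes.
n_vectors = 10_000_000
dim = 768            # e.g. a common sentence-embedding size (assumption)
pq_subvectors = 8    # 1 byte of code per subvector (assumption)

flat_bytes = n_vectors * dim * 4        # float32 = 4 bytes per component
pq_bytes = n_vectors * pq_subvectors    # 8 bytes of codes per vector

print(f"flat: {flat_bytes / 2**30:.1f} GiB")  # ~28.6 GiB
print(f"PQ:   {pq_bytes / 2**20:.1f} MiB")    # ~76.3 MiB
```

A ~380x reduction explains why PQ (or similar compression) is usually mandatory at this scale, and why the accuracy cost is worth measuring with offline recall evaluation.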
3. Indexing Failures
Data isn’t getting into the vector index as expected.
- Pitfall: Invalid Vector Data or Schema Mismatches.
  - Description: Attempting to insert vectors with incorrect dimensions, non-numeric values, or into a table without a properly defined `VECTOR<FLOAT, N>` column or `CUSTOM INDEX` can lead to write failures.
  - Debugging:
    - Application logs: Your application performing the `INSERT` operations will likely log errors if ScyllaDB rejects the write due to schema violations or invalid data.
    - ScyllaDB logs: ScyllaDB's `system.log` will contain detailed error messages if it fails to process an insert into the vector column or to create/maintain the index. Look for messages related to "invalid vector," "dimension mismatch," or "SAI index error."
    - Verify the schema: Double-check your `CREATE TABLE` and `CREATE CUSTOM INDEX` statements against your application's vector-generation logic. Ensure the `N` in `VECTOR<FLOAT, N>` matches the dimension of your embeddings.
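A cheap preventive measure is validating vectors client-side before the `INSERT`, so dimension mismatches surface as clear application errors instead of opaque write failures. A minimal sketch (the dimension matches this chapter's `VECTOR<FLOAT, 3>` demo schema):

```python
# Sketch: client-side validation of embeddings before INSERTing into
# a VECTOR<FLOAT, N> column. EXPECTED_DIM must match the table schema.
import math

EXPECTED_DIM = 3  # matches VECTOR<FLOAT, 3> in the demo table

def validate_embedding(vec, dim=EXPECTED_DIM):
    if len(vec) != dim:
        raise ValueError(f"expected {dim} dimensions, got {len(vec)}")
    # Reject non-numeric components, NaN, and +/-inf
    if any(not isinstance(x, (int, float)) or not math.isfinite(x) for x in vec):
        raise ValueError("embedding contains non-finite or non-numeric values")
    return vec

validate_embedding([0.1, 0.2, 0.3])  # ok — returns the vector unchanged
try:
    validate_embedding([0.1, 0.2])   # wrong dimension
except ValueError as e:
    print(e)  # expected 3 dimensions, got 2
```

Counting validation failures with a `prometheus_client` Counter would also feed directly into the error-rate dashboards discussed earlier.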
By understanding these common pitfalls and leveraging your monitoring tools, you’ll be well-equipped to keep your USearch-powered ScyllaDB vector search running smoothly.
Summary
Phew, we’ve covered a lot in this chapter! Monitoring and debugging are often overlooked but are absolutely essential for any production-ready system, especially complex ones like vector search.
Here are the key takeaways:
- Monitoring is non-negotiable: It ensures performance, reliability, resource optimization, and data quality for your vector search applications.
- Key metrics span layers: Focus on ScyllaDB’s vector search metrics (latency, throughput, indexing rate), core database health (CPU, memory, I/O), and application-level performance.
- Prometheus and Grafana are your best friends: Prometheus collects and stores metrics, while Grafana visualizes them through powerful, customizable dashboards.
- ScyllaDB offers fantastic integration: Its built-in Prometheus endpoint and official Grafana dashboards make getting started with monitoring straightforward.
- Custom metrics extend visibility: You learned how to expose application-specific metrics using libraries like `prometheus_client` in Python.
- Debugging is a systematic process: When issues arise, combine insights from metrics (Grafana), detailed events (logs), and your understanding of vector search principles to diagnose and resolve problems effectively.
- Common pitfalls include: Suboptimal index parameters, poor embedding quality, resource exhaustion, and data insertion errors.
You now have the foundational knowledge to not just build, but also to maintain and troubleshoot highly performant and reliable vector search systems with USearch and ScyllaDB.
References
- ScyllaDB Vector Search Documentation
- ScyllaDB Monitoring Stack Documentation
- Prometheus Official Documentation
- Grafana Official Documentation
- Grafana Labs ScyllaDB Dashboards
- USearch GitHub Repository
This page is AI-assisted and reviewed. It references official documentation and recognized resources where relevant.