# RHDH Model Catalog Bridge
This repository provides various containers that facilitate the seamless export of AI model records from various AI Model Registries and import them into Red Hat Developer Hub (Backstage) as catalog entities.
Current status: early POC stage.
- We have some Dockerfiles in place for the container images, but more work is needed there.
- This repository collaborates with Backstage catalog extensions currently hosted in our fork of the RHDH plugins repository.
- Until those plugins have associated images and can be added to OCP RHDH, we have to run those plugins, and by extension Backstage, from our laptops.
- By extension, the `rhoai-normalizer` and `storage-rest` containers have to run on one's laptop as well. The `location` container can run as an OCP deployment, but it is just as easy to run it from your laptop as well.
- This simple GitOps repo has the means of setting up Open Data Hub plus dev patches for the Kubeflow Model Registry that facilitate getting the URLs for running models deployed into RHOAI/ODH by the Model Registry.
## Contributing
All contributions are welcome. The Apache 2 license is used and does not require any contributor agreement to submit patches. That said, the preference at this time is not to track issues via GitHub issues in this repository. Rather, visit the team's RHDHPAI Jira project and the 'model-registry-bridge' component.
## Prerequisites
- An OpenShift cluster with 3 worker nodes, each with at least 8 CPUs and 32 GB of memory.
- On AWS, `m6i.2xlarge` or `g5.2xlarge` (if GPUs are needed) instances work well. For other options, see https://aws.amazon.com/ec2/instance-types/
## Usage
Either via the command line, or from your favorite Golang editor, set the following environment variables:
### rhoai-normalizer
- `K8S_TOKEN` - the login/bearer token of your `kubeadmin` user for the OCP cluster you are testing on
- `KUBECONFIG` - the path to the local kubeconfig file corresponding to your OCP cluster
- `MR_ROUTE` - the name of the Model Registry route in the `istio-system` namespace. For now, use `odh-model-registries-modelregistry-public-rest`.
- `NAMESPACE` - the name of the namespace you create for deploying AI models from ODH
- `STORAGE_URL` - for now, just use `http://localhost:7070`; this will be updated when we can run this container in OCP as part of the RHDH plugin running in RHDH
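As a sketch, these variables can be exported in your shell before launching `rhoai-normalizer`; the token and namespace values below are placeholders, not real credentials, and should be replaced with details from your own cluster:

```shell
# Placeholder values - substitute your own cluster details.
export K8S_TOKEN="sha256~replace-with-your-token"   # e.g. the output of `oc whoami -t` as kubeadmin
export KUBECONFIG="$HOME/.kube/config"              # path to the kubeconfig for your OCP cluster
export MR_ROUTE="odh-model-registries-modelregistry-public-rest"
export NAMESPACE="ai-models"                        # hypothetical namespace for ODH model deployments
export STORAGE_URL="http://localhost:7070"
```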
### storage-rest
- `RHDH_TOKEN` - the static token you create in Backstage that allows for authenticated access to the Backstage catalog API. See [https://github.com/redhat-ai-dev/rhdh-plugins/blob/main/workspaces/rhdh-ai/app-config.yaml#L19](https://github.com/redhat-ai-dev/rhdh-plugins/blob/main/workspaces/rhdh-ai/app-config.yaml#L19)
- `BKSTG_URL` - for now, just use `http://localhost:7007`; this will be updated when we can run this container in OCP as part of the RHDH plugin running in RHDH
- `BRIDGE_URL` - for now, just use `http://localhost:9090`; this is the REST endpoint of our `location` container
- `STORAGE_TYPE` - for now, only the development mode `ConfigMap` is supported; we'll add `GitHub` soon
- `K8S_TOKEN`, `KUBECONFIG`, and `NAMESPACE` are the same as above
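Similarly, a minimal sketch for `storage-rest`; the token value is a placeholder for the static token you configure in Backstage:

```shell
# Placeholder values - substitute your own Backstage static token.
export RHDH_TOKEN="replace-with-your-static-token"
export BKSTG_URL="http://localhost:7007"
export BRIDGE_URL="http://localhost:9090"
export STORAGE_TYPE="ConfigMap"    # only ConfigMap is supported for now
# K8S_TOKEN, KUBECONFIG, and NAMESPACE are set as for rhoai-normalizer above.
```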
### location
- None of the above env vars are needed at this time.