Commit 3e796ba (parent 5ed7767)

doc: fix missing references to README.md (opea-project#860)

Signed-off-by: David B. Kinder <[email protected]>

21 files changed (+32, -32)
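Every hunk in this commit makes the same one-line fix: a Markdown link that targets a bare directory is rewritten to point explicitly at that directory's README.md (or, in a few cases, a root-relative path is made repository-relative), presumably so the links still resolve when the Markdown is rendered outside GitHub's directory view. A minimal sketch of the pattern, using a hypothetical repository path rather than any file changed below:

-For more details, refer to the [guide](https://github.com/example-org/example-repo/tree/main/some/component).
+For more details, refer to the [guide](https://github.com/example-org/example-repo/tree/main/some/component/README.md).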

Diff for: AgentQnA/README.md (+1, -1)

@@ -103,4 +103,4 @@ curl http://${ip_address}:9090/v1/chat/completions -X POST -H "Content-Type: app

 ## How to register your own tools with agent

-You can take a look at the tools yaml and python files in this example. For more details, please refer to the "Provide your own tools" section in the instructions [here](https://github.com/opea-project/GenAIComps/tree/main/comps/agent/langchain#5-customize-agent-strategy).
+You can take a look at the tools yaml and python files in this example. For more details, please refer to the "Provide your own tools" section in the instructions [here](https://github.com/opea-project/GenAIComps/tree/main/comps/agent/langchain/README.md#5-customize-agent-strategy).

Diff for: AudioQnA/benchmark/accuracy/README.md (+1, -1)

@@ -14,7 +14,7 @@ We evaluate the WER (Word Error Rate) metric of the ASR microservice.

 ### Launch ASR microservice

-Launch the ASR microserice with the following commands. For more details please refer to [doc](https://github.com/opea-project/GenAIComps/tree/main/comps/asr).
+Launch the ASR microserice with the following commands. For more details please refer to [doc](https://github.com/opea-project/GenAIComps/tree/main/comps/asr/whisper/README.md).

 ```bash
 git clone https://github.com/opea-project/GenAIComps

Diff for: AudioQnA/kubernetes/intel/README_gmc.md (+1, -1)

@@ -4,7 +4,7 @@ This document outlines the deployment process for a AudioQnA application utilizi

 The AudioQnA Service leverages a Kubernetes operator called genai-microservices-connector(GMC). GMC supports connecting microservices to create pipelines based on the specification in the pipeline yaml file in addition to allowing the user to dynamically control which model is used in a service such as an LLM or embedder. The underlying pipeline language also supports using external services that may be running in public or private cloud elsewhere.

-Install GMC in your Kubernetes cluster, if you have not already done so, by following the steps in Section "Getting Started" at [GMC Install](https://github.com/opea-project/GenAIInfra/tree/main/microservices-connector). Soon as we publish images to Docker Hub, at which point no builds will be required, simplifying install.
+Install GMC in your Kubernetes cluster, if you have not already done so, by following the steps in Section "Getting Started" at [GMC Install](https://github.com/opea-project/GenAIInfra/tree/main/microservices-connector/README.md). Soon as we publish images to Docker Hub, at which point no builds will be required, simplifying install.


 The AudioQnA application is defined as a Custom Resource (CR) file that the above GMC operator acts upon. It first checks if the microservices listed in the CR yaml file are running, if not starts them and then proceeds to connect them. When the AudioQnA pipeline is ready, the service endpoint details are returned, letting you use the application. Should you use "kubectl get pods" commands you will see all the component microservices, in particular `asr`, `tts`, and `llm`.

Diff for: ChatQnA/README.md (+2, -2)

@@ -240,7 +240,7 @@ Refer to the [Kubernetes Guide](./kubernetes/intel/README.md) for instructions o

 Install Helm (version >= 3.15) first. Refer to the [Helm Installation Guide](https://helm.sh/docs/intro/install/) for more information.

-Refer to the [ChatQnA helm chart](https://github.com/opea-project/GenAIInfra/tree/main/helm-charts/chatqna) for instructions on deploying ChatQnA into Kubernetes on Xeon & Gaudi.
+Refer to the [ChatQnA helm chart](https://github.com/opea-project/GenAIInfra/tree/main/helm-charts/chatqna/README.md) for instructions on deploying ChatQnA into Kubernetes on Xeon & Gaudi.

 ### Deploy ChatQnA on AI PC

@@ -306,7 +306,7 @@ Two ways of consuming ChatQnA Service:

 ## Troubleshooting

-1. If you get errors like "Access Denied", [validate micro service](https://github.com/opea-project/GenAIExamples/tree/main/ChatQnA/docker_compose/intel/cpu/xeon#validate-microservices) first. A simple example:
+1. If you get errors like "Access Denied", [validate micro service](https://github.com/opea-project/GenAIExamples/tree/main/ChatQnA/docker_compose/intel/cpu/xeon/README.md#validate-microservices) first. A simple example:

 ```bash
 http_proxy="" curl ${host_ip}:6006/embed -X POST -d '{"inputs":"What is Deep Learning?"}' -H 'Content-Type: application/json'

Diff for: ChatQnA/benchmark/performance/README.md (+1, -1)

@@ -90,7 +90,7 @@ find . -name '*.yaml' -type f -exec sed -i "s#\$(RERANK_MODEL_ID)#${RERANK_MODEL

 ### Benchmark tool preparation

-The test uses the [benchmark tool](https://github.com/opea-project/GenAIEval/tree/main/evals/benchmark) to do performance test. We need to set up benchmark tool at the master node of Kubernetes which is k8s-master.
+The test uses the [benchmark tool](https://github.com/opea-project/GenAIEval/tree/main/evals/benchmark/README.md) to do performance test. We need to set up benchmark tool at the master node of Kubernetes which is k8s-master.

 ```bash
 # on k8s-master node

Diff for: ChatQnA/kubernetes/intel/README_gmc.md (+2, -2)

@@ -2,9 +2,9 @@

 This document outlines the deployment process for a ChatQnA application utilizing the [GenAIComps](https://github.com/opea-project/GenAIComps.git) microservice pipeline components on Intel Xeon server and Gaudi machines.

-The ChatQnA Service leverages a Kubernetes operator called genai-microservices-connector(GMC). GMC supports connecting microservices to create pipelines based on the specification in the pipeline yaml file in addition to allowing the user to dynamically control which model is used in a service such as an LLM or embedder. The underlying pipeline language also supports using external services that may be running in public or private cloud elsewhere.
+The ChatQnA Service leverages a Kubernetes operator called genai-microservices-connector (GMC). GMC supports connecting microservices to create pipelines based on the specification in the pipeline yaml file in addition to allowing the user to dynamically control which model is used in a service such as an LLM or embedder. The underlying pipeline language also supports using external services that may be running in public or private cloud elsewhere.

-Install GMC in your Kubernetes cluster, if you have not already done so, by following the steps in Section "Getting Started" at [GMC Install](https://github.com/opea-project/GenAIInfra/tree/main/microservices-connector). Soon as we publish images to Docker Hub, at which point no builds will be required, simplifying install.
+Install GMC in your Kubernetes cluster, if you have not already done so, by following the steps in Section "Getting Started" at [GMC Install](https://github.com/opea-project/GenAIInfra/tree/main/microservices-connector/README.md). Soon as we publish images to Docker Hub, at which point no builds will be required, simplifying install.


 The ChatQnA application is defined as a Custom Resource (CR) file that the above GMC operator acts upon. It first checks if the microservices listed in the CR yaml file are running, if not starts them and then proceeds to connect them. When the ChatQnA RAG pipeline is ready, the service endpoint details are returned, letting you use the application. Should you use "kubectl get pods" commands you will see all the component microservices, in particular `embedding`, `retriever`, `rerank`, and `llm`.

Diff for: CodeGen/README.md (+2, -2)

@@ -106,7 +106,7 @@ Refer to the [Kubernetes Guide](./kubernetes/intel/README.md) for instructions o

 Install Helm (version >= 3.15) first. Refer to the [Helm Installation Guide](https://helm.sh/docs/intro/install/) for more information.

-Refer to the [CodeGen helm chart](https://github.com/opea-project/GenAIInfra/tree/main/helm-charts/codegen) for instructions on deploying CodeGen into Kubernetes on Xeon & Gaudi.
+Refer to the [CodeGen helm chart](https://github.com/opea-project/GenAIInfra/tree/main/helm-charts/codegen/README.md) for instructions on deploying CodeGen into Kubernetes on Xeon & Gaudi.

 ## Consume CodeGen Service

@@ -128,7 +128,7 @@ Two ways of consuming CodeGen Service:

 ## Troubleshooting

-1. If you get errors like "Access Denied", [validate micro service](https://github.com/opea-project/GenAIExamples/tree/main/CodeGen/docker_compose/intel/cpu/xeon#validate-microservices) first. A simple example:
+1. If you get errors like "Access Denied", [validate micro service](https://github.com/opea-project/GenAIExamples/tree/main/CodeGen/docker_compose/intel/cpu/xeon/README.md#validate-microservices) first. A simple example:

 ```bash
 http_proxy=""

Diff for: CodeGen/benchmark/accuracy/README.md (+1, -1)

@@ -8,7 +8,7 @@ We evaluate accuracy by [bigcode-evaluation-harness](https://github.com/bigcode-

 ### Launch CodeGen microservice

-Please refer to [CodeGen Examples](https://github.com/opea-project/GenAIExamples/tree/main/CodeGen), follow the guide to deploy CodeGen megeservice.
+Please refer to [CodeGen Examples](https://github.com/opea-project/GenAIExamples/tree/main/CodeGen/README.md), follow the guide to deploy CodeGen megeservice.

 Use `curl` command to test codegen service and ensure that it has started properly

Diff for: CodeGen/kubernetes/intel/README_gmc.md (+1, -1)

@@ -2,7 +2,7 @@

 This document outlines the deployment process for a Code Generation (CodeGen) application that utilizes the [GenAIComps](https://github.com/opea-project/GenAIComps.git) microservice components on Intel Xeon servers and Gaudi machines.

-Install GMC in your Kubernetes cluster, if you have not already done so, by following the steps in Section "Getting Started" at [GMC Install](https://github.com/opea-project/GenAIInfra/tree/main/microservices-connector#readme). We will soon publish images to Docker Hub, at which point no builds will be required, further simplifying install.
+Install GMC in your Kubernetes cluster, if you have not already done so, by following the steps in Section "Getting Started" at [GMC Install](https://github.com/opea-project/GenAIInfra/tree/main/microservices-connector/README.md). We will soon publish images to Docker Hub, at which point no builds will be required, further simplifying install.

 If you have only Intel Xeon machines you could use the codegen_xeon.yaml file or if you have a Gaudi cluster you could use codegen_gaudi.yaml
 In the below example we illustrate on Xeon.

Diff for: CodeTrans/README.md (+2, -2)

@@ -99,7 +99,7 @@ Refer to the [Code Translation Kubernetes Guide](./kubernetes/intel/README.md)

 Install Helm (version >= 3.15) first. Refer to the [Helm Installation Guide](https://helm.sh/docs/intro/install/) for more information.

-Refer to the [CodeTrans helm chart](https://github.com/opea-project/GenAIInfra/tree/main/helm-charts/codetrans) for instructions on deploying CodeTrans into Kubernetes on Xeon & Gaudi.
+Refer to the [CodeTrans helm chart](https://github.com/opea-project/GenAIInfra/tree/main/helm-charts/codetrans/README.md) for instructions on deploying CodeTrans into Kubernetes on Xeon & Gaudi.

 ## Consume Code Translation Service

@@ -121,7 +121,7 @@ By default, the UI runs on port 5173 internally.

 ## Troubleshooting

-1. If you get errors like "Access Denied", [validate micro service](https://github.com/opea-project/GenAIExamples/tree/main/CodeTrans/docker_compose/intel/cpu/xeon#validate-microservices) first. A simple example:
+1. If you get errors like "Access Denied", [validate micro service](https://github.com/opea-project/GenAIExamples/tree/main/CodeTrans/docker_compose/intel/cpu/xeon/README.md#validate-microservices) first. A simple example:

 ```bash
 http_proxy=""

Diff for: CodeTrans/kubernetes/intel/README_gmc.md (+1, -1)

@@ -2,7 +2,7 @@

 This document outlines the deployment process for a Code Translation (CodeTran) application that utilizes the [GenAIComps](https://github.com/opea-project/GenAIComps.git) microservice components on Intel Xeon servers and Gaudi machines.

-Please install GMC in your Kubernetes cluster, if you have not already done so, by following the steps in Section "Getting Started" at [GMC Install](https://github.com/opea-project/GenAIInfra/tree/main/microservices-connector#readme). We will soon publish images to Docker Hub, at which point no builds will be required, further simplifying install.
+Please install GMC in your Kubernetes cluster, if you have not already done so, by following the steps in Section "Getting Started" at [GMC Install](https://github.com/opea-project/GenAIInfra/tree/main/microservices-connector/README.md). We will soon publish images to Docker Hub, at which point no builds will be required, further simplifying install.

 If you have only Intel Xeon machines you could use the codetrans_xeon.yaml file or if you have a Gaudi cluster you could use codetrans_gaudi.yaml
 In the below example we illustrate on Xeon.

Diff for: DocIndexRetriever/README.md (+2, -2)

@@ -4,5 +4,5 @@ DocRetriever are the most widely adopted use case for leveraging the different m

 ## We provided DocRetriever with different deployment infra

-- [docker xeon version](docker_compose/intel/cpu/xeon/) => minimum endpoints, easy to setup
-- [docker gaudi version](docker_compose/intel/hpu/gaudi/) => with extra tei_gaudi endpoint, faster
+- [docker xeon version](docker_compose/intel/cpu/xeon/README.md) => minimum endpoints, easy to setup
+- [docker gaudi version](docker_compose/intel/hpu/gaudi/README.md) => with extra tei_gaudi endpoint, faster

Diff for: DocSum/README.md (+3, -3)

@@ -21,7 +21,7 @@ Currently we support two ways of deploying Document Summarization services with
 docker pull opea/docsum:latest
 ```

-2. Start services using the docker images `built from source`: [Guide](./docker_compose)
+2. Start services using the docker images `built from source`: [Guide](https://github.com/opea-project/GenAIExamples/tree/main/DocSum/docker_compose)

 ### Required Models

@@ -98,7 +98,7 @@ Refer to [Kubernetes deployment](./kubernetes/intel/README.md)

 Install Helm (version >= 3.15) first. Refer to the [Helm Installation Guide](https://helm.sh/docs/intro/install/) for more information.

-Refer to the [DocSum helm chart](https://github.com/opea-project/GenAIInfra/tree/main/helm-charts/docsum) for instructions on deploying DocSum into Kubernetes on Xeon & Gaudi.
+Refer to the [DocSum helm chart](https://github.com/opea-project/GenAIInfra/tree/main/helm-charts/docsum/README.md) for instructions on deploying DocSum into Kubernetes on Xeon & Gaudi.

 ### Workflow of the deployed Document Summarization Service

@@ -143,7 +143,7 @@ Two ways of consuming Document Summarization Service:

 ## Troubleshooting

-1. If you get errors like "Access Denied", [validate micro service](https://github.com/opea-project/GenAIExamples/tree/main/DocSum/docker_compose/intel/cpu/xeon#validate-microservices) first. A simple example:
+1. If you get errors like "Access Denied", [validate micro service](https://github.com/opea-project/GenAIExamples/tree/main/DocSum/docker_compose/intel/cpu/xeon/README.md#validate-microservices) first. A simple example:

 ```bash
 http_proxy=""

Diff for: DocSum/kubernetes/intel/README_gmc.md (+1, -1)

@@ -3,7 +3,7 @@
 This document outlines the deployment process for a Document Summary (DocSum) application that utilizes the [GenAIComps](https://github.com/opea-project/GenAIComps.git) microservice components on Intel Xeon servers and Gaudi machines.
 The DocSum Service leverages a Kubernetes operator called genai-microservices-connector(GMC). GMC supports connecting microservices to create pipelines based on the specification in the pipeline yaml file, in addition it allows the user to dynamically control which model is used in a service such as an LLM or embedder. The underlying pipeline language also supports using external services that may be running in public or private clouds elsewhere.

-Install GMC in your Kubernetes cluster, if you have not already done so, by following the steps in Section "Getting Started" at [GMC Install](https://github.com/opea-project/GenAIInfra/tree/main/microservices-connector#readme). We will soon publish images to Docker Hub, at which point no builds will be required, further simplifying install.
+Install GMC in your Kubernetes cluster, if you have not already done so, by following the steps in Section "Getting Started" at [GMC Install](https://github.com/opea-project/GenAIInfra/tree/main/microservices-connector/README.md). We will soon publish images to Docker Hub, at which point no builds will be required, further simplifying install.

 The DocSum application is defined as a Custom Resource (CR) file that the above GMC operator acts upon. It first checks if the microservices listed in the CR yaml file are running, if not it starts them and then proceeds to connect them. When the DocSum RAG pipeline is ready, the service endpoint details are returned, letting you use the application. Should you use "kubectl get pods" commands you will see all the component microservices, in particular embedding, retriever, rerank, and llm.

Diff for: FaqGen/benchmark/accuracy/README.md (+1, -1)

@@ -16,7 +16,7 @@ python get_context.py

 ### Launch FaQGen microservice

-Please refer to [FaQGen microservice](https://github.com/opea-project/GenAIComps/tree/main/comps/llms/faq-generation/tgi), set up an microservice endpoint.
+Please refer to [FaQGen microservice](https://github.com/opea-project/GenAIComps/tree/main/comps/llms/faq-generation/tgi/langchain/README.md), set up an microservice endpoint.

 ```
 export FAQ_ENDPOINT = "http://${your_ip}:9000/v1/faqgen"

Diff for: LEGAL_INFORMATION.md (+2, -2)

@@ -9,9 +9,9 @@ Generative AI Examples is licensed under [Apache License Version 2.0](http://www
 This software includes components that have separate copyright notices and licensing terms.
 Your use of the source code for these components is subject to the terms and conditions of the following licenses.

-- [Third Party Programs](/third-party-programs.txt)
+- [Third Party Programs](third-party-programs.txt)

-See the accompanying [license](/LICENSE) file for full license text and copyright notices.
+See the accompanying [license](LICENSE) file for full license text and copyright notices.

 ## Citation

Diff for: README.md (+3, -3)

@@ -30,8 +30,8 @@ Deployment are based on released docker images by default, check [docker image l
 #### Prerequisite

 - For Docker Compose based deployment, you should have docker compose installed. Refer to [docker compose install](https://docs.docker.com/compose/install/).
-- For Kubernetes based deployment, we provide 3 ways from the easiest manifests to powerful [GMC](https://github.com/opea-project/GenAIInfra/tree/main/microservices-connector) based deployment.
-  - You should have a kubernetes cluster ready for use. If not, you can refer to [k8s install](https://github.com/opea-project/docs/tree/main/guide/installation/k8s_install) to deploy one.
+- For Kubernetes based deployment, we provide 3 ways from the easiest manifests to powerful [GMC](https://github.com/opea-project/GenAIInfra/tree/main/microservices-connector/README.md) based deployment.
+  - You should have a kubernetes cluster ready for use. If not, you can refer to [k8s install](https://github.com/opea-project/docs/tree/main/guide/installation/k8s_install/README.md) to deploy one.
 - (Optional) You should have GMC installed to your kubernetes cluster if you want to try with GMC. Refer to [GMC install](https://github.com/opea-project/docs/blob/main/guide/installation/gmc_install/gmc_install.md) for more information.
 - (Optional) You should have Helm (version >= 3.15) installed if you want to deploy with Helm Charts. Refer to the [Helm Installation Guide](https://helm.sh/docs/intro/install/) for more information.

@@ -68,4 +68,4 @@ Thank you for being a part of this journey. We can't wait to see what we can ach

 - [Code of Conduct](https://github.com/opea-project/docs/tree/main/community/CODE_OF_CONDUCT.md)
 - [Security Policy](https://github.com/opea-project/docs/tree/main/community/SECURITY.md)
-- [Legal Information](/LEGAL_INFORMATION.md)
+- [Legal Information](LEGAL_INFORMATION.md)

Diff for: SearchQnA/README.md (+2, -2)

@@ -32,7 +32,7 @@ Currently we support two ways of deploying SearchQnA services with docker compos
 docker pull opea/searchqna:latest
 ```

-2. Start services using the docker images `built from source`: [Guide](./docker_compose)
+2. Start services using the docker images `built from source`: [Guide](https://github.com/opea-project/GenAIExamples/tree/main/SearchQnA/docker_compose/)

 ### Setup Environment Variable

@@ -110,7 +110,7 @@ Two ways of consuming SearchQnA Service:

 ## Troubleshooting

-1. If you get errors like "Access Denied", [validate micro service](https://github.com/opea-project/GenAIExamples/tree/main/ChatQnA/docker_compose/intel/cpu/xeon#validate-microservices) first. A simple example:
+1. If you get errors like "Access Denied", [validate micro service](https://github.com/opea-project/GenAIExamples/tree/main/ChatQnA/docker_compose/intel/cpu/xeon/README.md#validate-microservices) first. A simple example:

 ```bash
 http_proxy=""
