Commit 3104a0b: Add OPEA v1.4 release notes (#392), authored by Yi Yao <[email protected]>. 1 file changed: release_notes/v1.4.md (+127, -0).

# OPEA Release Notes v1.4

We are excited to announce the release of OPEA version 1.4, which includes significant contributions from the open-source community. This release addresses over 330 pull requests.

More information about how to get started with OPEA v1.4 can be found on the [Getting Started](https://opea-project.github.io/latest/index.html) page. All project source code is maintained in the [opea-project organization](https://github.com/opea-project). To pull Docker images, please visit [Docker Hub](https://hub.docker.com/u/opea). For instructions on deploying Helm charts, please refer to the [guide](https://github.com/opea-project/GenAIInfra/tree/v1.4/helm-charts#readme).
## Table of Contents

- [OPEA Release Notes v1.4](#opea-release-notes-v14)
  - [Table of Contents](#table-of-contents)
  - [What's New in OPEA v1.4](#whats-new-in-opea-v14)
    - [Advanced Agent Capabilities](#advanced-agent-capabilities)
    - [Components as MCP Servers](#components-as-mcp-servers)
    - [KubeAI Operator for OPEA](#kubeai-operator-for-opea)
    - [New GenAI Capabilities](#new-genai-capabilities)
    - [Better User Experience](#better-user-experience)
    - [Newly Supported Models](#newly-supported-models)
    - [Newly Supported Hardware](#newly-supported-hardware)
    - [Newly Supported OS](#newly-supported-os)
  - [Updated Dependencies](#updated-dependencies)
  - [Changes to Default Behavior](#changes-to-default-behavior)
  - [Validated Hardware](#validated-hardware)
  - [Validated Software](#validated-software)
  - [Known Issues](#known-issues)
  - [Full Changelogs](#full-changelogs)
  - [Contributors](#contributors)
    - [Contributing Organizations](#contributing-organizations)
    - [Individual Contributors](#individual-contributors)

## What's New in OPEA v1.4

This release includes new features, optimizations, and user-focused updates.

### Advanced Agent Capabilities

- <b>MCP (Model Context Protocol) Support</b>: The OPEA agent now supports MCP, allowing standardized and more efficient integration with external data and services. ([GenAIComps#1678](https://github.com/opea-project/GenAIComps/pull/1678), [GenAIComps#1810](https://github.com/opea-project/GenAIComps/pull/1810))

- <b>Deep Research Agent</b>: This [example](https://github.com/opea-project/GenAIExamples/tree/v1.4/DeepResearchAgent) is designed to handle complex, multi-step research tasks. It leverages [langchain-ai/open_deep_research](https://github.com/langchain-ai/open_deep_research) and supports Intel Gaudi accelerators. ([GenAIExamples#2117](https://github.com/opea-project/GenAIExamples/pull/2117))
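Conceptually, a deep-research agent decomposes a question into sub-queries, gathers evidence for each, and synthesizes an answer. The loop below is an illustrative, self-contained sketch of that plan-search-synthesize pattern, not the Deep Research Agent implementation; the `plan` and `search` functions are hypothetical stand-ins for LLM- and tool-backed steps.

```python
# Illustrative sketch of a multi-step research loop (NOT the actual
# Deep Research Agent implementation). plan() and search() stand in
# for LLM-driven decomposition and real retrieval tools.

def plan(question: str) -> list[str]:
    # A real agent would ask an LLM to decompose the question.
    return [f"background of {question}", f"recent work on {question}"]

def search(subquery: str) -> str:
    # Stand-in for a web-search or RAG tool call.
    return f"findings for '{subquery}'"

def deep_research(question: str) -> str:
    notes = [search(sq) for sq in plan(question)]
    # A real agent would iterate, re-plan, and synthesize with an LLM.
    return " | ".join(notes)

print(deep_research("OPEA"))
```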
### Components as MCP Servers

OPEA components can now serve as Model Context Protocol (MCP) servers, allowing external MCP-compatible frameworks and applications to integrate with OPEA seamlessly. ([GenAIComps#1652](https://github.com/opea-project/GenAIComps/issues/1652))
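MCP exchanges JSON-RPC 2.0 messages, so an MCP-compatible client invokes a tool exposed by a server with a `tools/call` request. The snippet below constructs such a request with the standard library to show the wire shape; the tool name `opea_retriever` and its arguments are hypothetical examples, and real integrations would use an MCP SDK over stdio or HTTP transport.

```python
import json

# Shape of an MCP tools/call request (JSON-RPC 2.0). The tool name
# "opea_retriever" and its arguments are hypothetical, not actual
# OPEA component identifiers.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "opea_retriever",
        "arguments": {"query": "What is OPEA?", "top_k": 3},
    },
}

payload = json.dumps(request)          # serialized message sent to the server
decoded = json.loads(payload)          # round-trip to verify the structure
print(decoded["method"], decoded["params"]["name"])
```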
### KubeAI Operator for OPEA

The KubeAI Operator now features an improved autoscaler, monitoring support, optimized resource placement via [NRI plugins](https://github.com/containers/nri-plugins), and expanded support for new models on Gaudi. ([GenAIInfra#967](https://github.com/opea-project/GenAIInfra/pull/967), [GenAIInfra#1052](https://github.com/opea-project/GenAIInfra/pull/1052), [GenAIInfra#1054](https://github.com/opea-project/GenAIInfra/pull/1054), [GenAIInfra#1089](https://github.com/opea-project/GenAIInfra/pull/1089), [GenAIInfra#1113](https://github.com/opea-project/GenAIInfra/pull/1113), [GenAIInfra#1144](https://github.com/opea-project/GenAIInfra/pull/1144), [GenAIInfra#1150](https://github.com/opea-project/GenAIInfra/pull/1150))
### New GenAI Capabilities

- <b>Fine-Tuning of Reasoning Models</b>: This feature is compatible with the dataset format used in [FreedomIntelligence/medical-o1-reasoning-SFT](https://huggingface.co/datasets/FreedomIntelligence/medical-o1-reasoning-SFT), enabling you to customize models with your own data. ([GenAIComps#1839](https://github.com/opea-project/GenAIComps/pull/1839))
- <b>HybridRAG</b>: Combines GraphRAG (knowledge graph-based retrieval) and VectorRAG (vector database retrieval) for enhanced accuracy and contextual relevance. ([GenAIExamples#1968](https://github.com/opea-project/GenAIExamples/pull/1968))
- <b>LLM Router</b>: LLM Router decides which downstream LLM serving endpoint is best suited for an incoming prompt. ([GenAIComps#1716](https://github.com/opea-project/GenAIComps/pull/1716))
- <b>OPEA Store</b>: Redis and MongoDB have been integrated into OPEA Store. ([GenAIComps#1816](https://github.com/opea-project/GenAIComps/pull/1816), [GenAIComps#1818](https://github.com/opea-project/GenAIComps/pull/1818))
- <b>Guardrails</b>: Added input/output guardrails to enforce content safety and prevent the generation of inappropriate outputs. ([GenAIComps#1798](https://github.com/opea-project/GenAIComps/pull/1798))
- <b>Language Detection</b>: This microservice ensures that the pipeline's response matches the language of the query. ([GenAIComps#1774](https://github.com/opea-project/GenAIComps/pull/1774))
- <b>Prompt Template</b>: This microservice dynamically generates system and user prompts based on structured inputs and document context. ([GenAIComps#1826](https://github.com/opea-project/GenAIComps/pull/1826))
- <b>Air-gapped Environment Support</b>: Some OPEA microservices can now be deployed in an air-gapped Docker environment. ([GenAIComps#1480](https://github.com/opea-project/GenAIComps/issues/1480))
- <b>Remote Inference Endpoints Support</b>: Added support for remote inference endpoints to OPEA examples. ([GenAIExamples#1973](https://github.com/opea-project/GenAIExamples/issues/1973))
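To illustrate the routing idea, a router maps each incoming prompt to one serving endpoint. The sketch below uses a deliberately simple keyword heuristic with hypothetical endpoint URLs; it is not the OPEA LLM Router implementation, whose routing logic and configuration differ.

```python
# Minimal illustrative sketch of prompt-to-endpoint routing (NOT the
# OPEA LLM Router implementation). Endpoint URLs are hypothetical.
CODE_HINTS = ("def ", "class ", "code", "python", "function", "bug")

def route(prompt: str) -> str:
    """Pick a downstream LLM serving endpoint for a prompt."""
    text = prompt.lower()
    if any(hint in text for hint in CODE_HINTS):
        return "http://coder-llm:8000/v1/chat/completions"
    return "http://general-llm:8000/v1/chat/completions"

print(route("Fix this Python function"))   # routed to the coder endpoint
print(route("Summarize this article"))     # routed to the general endpoint
```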
### Better User Experience

- <b>One-click Deployment</b>: You can now deploy 8 OPEA examples with one click, and ChatQnA can be deployed in an air-gapped Docker environment. ([GenAIExamples#1727](https://github.com/opea-project/GenAIExamples/issues/1727))
- <b>GenAIStudio</b>: Added support for drag-and-drop creation of document summarization and code generation applications. ([GenAIStudio#61](https://github.com/opea-project/GenAIStudio/pull/61))
- <b>Documentation Refinement</b>: Refined READMEs for key examples and components to help readers easily locate documentation tailored to deployment, customization, and hardware. ([GenAIExamples#1673](https://github.com/opea-project/GenAIExamples/issues/1673), [GenAIComps#1398](https://github.com/opea-project/GenAIComps/issues/1398))
### Newly Supported Models

OPEA introduces support for the following models in this release.

| Model | TGI-Gaudi | vLLM-CPU | vLLM-Gaudi | vLLM-ROCm | OVMS | Optimum-Habana | PredictionGuard | SGLANG-CPU |
| --------------------------------------------- | --------- | -------- | ---------- | --------- | ---- | -------------- | --------------- | ---------- |
| meta-llama/Llama-4-Scout-17B-16E-Instruct | - | - | - | - | - | - | - | ✓ |
| meta-llama/Llama-4-Maverick-17B-128E-Instruct | - | - | - | - | - | - | - | ✓ |

(✓: supported; -: not validated; x: unsupported)
### Newly Supported Hardware

- Support for AMD® EPYC™ processors has been added for 11 OPEA examples. ([GenAIExamples#2083](https://github.com/opea-project/GenAIExamples/pull/2083))

### Newly Supported OS

- Support for openEuler has been added. ([GenAIExamples#2088](https://github.com/opea-project/GenAIExamples/pull/2088), [GenAIComps#1813](https://github.com/opea-project/GenAIComps/pull/1813))
## Updated Dependencies

| Dependency | Hardware | Scope | Version | Version in OPEA v1.3 | Comments |
|--|--|--|--|--|--|
| huggingface/text-embeddings-inference | all | all supported examples | cpu-1.7 | cpu-1.6 | |
| vllm | Xeon | all supported examples except EdgeCraftRAG | v0.10.0 | v0.8.3 | |
## Changes to Default Behavior

- `CodeTrans`: The default model changed from `mistralai/Mistral-7B-Instruct-v0.3` to `Qwen/Qwen2.5-Coder-7B-Instruct` on Xeon and Gaudi.
## Validated Hardware

- Intel® Gaudi® AI Accelerators (2nd generation)
- Intel® Xeon® Scalable processors (3rd generation)
- Intel® Arc™ Graphics GPU (A770)
- AMD® EPYC™ processors (4th and 5th generations)
## Validated Software

- Docker v28.3.3
- Docker Compose v2.39.1
- Intel® Gaudi® software and drivers [v1.21](https://docs.habana.ai/en/v1.21.3/Installation_Guide/)
- Kubernetes v1.32.7
- TEI v1.7
- TGI v2.4.0 (Xeon, EPYC), v2.3.1 (Gaudi), v2.4.1 (ROCm)
- Torch v2.5.1
- Ubuntu 22.04
- vLLM v0.10.0 (Xeon, EPYC), v0.6.6.post1+Gaudi-1.20.0 (Gaudi)
## Known Issues

- [AvatarChatbot](https://github.com/opea-project/GenAIExamples/tree/v1.4/AvatarChatbot) cannot run in a Kubernetes environment due to a functional gap in the wav2lip service. ([GenAIExamples#1506](https://github.com/opea-project/GenAIExamples/pull/1506))
## Full Changelogs

- GenAIExamples: [v1.3...v1.4](https://github.com/opea-project/GenAIExamples/compare/v1.3...v1.4)
- GenAIComps: [v1.3...v1.4](https://github.com/opea-project/GenAIComps/compare/v1.3...v1.4)
- GenAIInfra: [v1.3...v1.4](https://github.com/opea-project/GenAIInfra/compare/v1.3...v1.4)
- GenAIEval: [v1.3...v1.4](https://github.com/opea-project/GenAIEval/compare/v1.3...v1.4)
- GenAIStudio: [v1.3...v1.4](https://github.com/opea-project/GenAIStudio/compare/v1.3...v1.4)
- docs: [v1.3...v1.4](https://github.com/opea-project/docs/compare/v1.3...v1.4)
## Contributors

This release would not have been possible without the contributions of the following organizations and individuals.

### Contributing Organizations

- `AMD`: AMD EPYC support.
- `Bud`: Components as MCP Servers.
- `Intel`: Development and improvements to GenAI examples, components, infrastructure, evaluation, and studio.
- `MariaDB`: Added a ChatQnA docker-compose example on Intel Xeon using MariaDB Vector.
- `openEuler`: openEuler OS support.

### Individual Contributors

For a comprehensive list of individual contributors, please refer to the [Full Changelogs](#full-changelogs) section.

0 commit comments

Comments
 (0)