
Commit 7c48797

v0.10.32 (run-llama#13119)

1 parent: fcf1913

9 files changed: +117 -27 lines changed

CHANGELOG.md

+31

@@ -1,5 +1,36 @@
 # ChangeLog

+## [2024-04-25]
+
+### `llama-index-core` [0.10.32]
+
+- Corrected wrong output type for `OutputKeys.from_keys()` (#13086)
+- Add `run_jobs` to AWS base embedding (#13096)
+- Allow users to customize the keyword extractor prompt template (#13083)
+- (CondenseQuestionChatEngine) Do not condense the question if there's no conversation history (#13069)
+- QueryPlanTool: Execute tool calls in subsequent (dependent) nodes in the query plan (#13047)
+- Fix fusion retriever sometimes returning NoneType queries before similarity search (#13112)
+
+### `llama-index-embeddings-ipex-llm` [0.1.1]
+
+- Support llama-index-embeddings-ipex-llm for Intel GPUs (#13097)
+
+### `llama-index-packs-raft-dataset` [0.1.4]
+
+- Fix bug in RAFT dataset generator: multiple system prompts (#12751)
+
+### `llama-index-readers-microsoft-sharepoint` [0.2.1]
+
+- Add access-control-related metadata to the SharePoint reader (#13067)
+
+### `llama-index-vector-stores-pinecone` [0.1.6]
+
+- Nested metadata filter support (#13113)
+
+### `llama-index-vector-stores-qdrant` [0.2.8]
+
+- Nested metadata filter support (#13113)
+
 ## [2024-04-23]

 ### `llama-index-core` [0.10.31]
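The Pinecone and Qdrant entries above (#13113) both refer to nested metadata filters, where one `MetadataFilters` group can contain another. Below is a minimal sketch of what such a nested filter could look like using the core filter types; the field names, values, and the `index` object are illustrative assumptions, not taken from this commit.

```python
from llama_index.core.vector_stores import (
    FilterCondition,
    FilterOperator,
    MetadataFilter,
    MetadataFilters,
)

# Nested group: (author == "alice" OR author == "bob") AND year >= 2023.
# Keys and values here are placeholders for illustration only.
nested_filters = MetadataFilters(
    condition=FilterCondition.AND,
    filters=[
        MetadataFilters(
            condition=FilterCondition.OR,
            filters=[
                MetadataFilter(key="author", value="alice", operator=FilterOperator.EQ),
                MetadataFilter(key="author", value="bob", operator=FilterOperator.EQ),
            ],
        ),
        MetadataFilter(key="year", value=2023, operator=FilterOperator.GTE),
    ],
)

# Usage sketch, assuming `index` is a VectorStoreIndex backed by the
# Pinecone or Qdrant vector store:
# retriever = index.as_retriever(filters=nested_filters)
# nodes = retriever.retrieve("What did Alice or Bob publish recently?")
```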

SECURITY.md

+33 -5

@@ -1,11 +1,39 @@
 # Security Policy

-## Supported Versions
+Before reporting a vulnerability, please review the In-Scope Targets and Out-of-Scope Targets below.

-Currently, we support security patches by committing changes and bumping the version published to PyPi.
+## In-Scope Targets

-## Reporting a Vulnerability
+The following packages and repositories are eligible for bug bounties:

-Found a vulnerability? Please email us:
+- llama-index-core
+- llama-index-integrations (see exceptions)
+- llama-index-networks

-
+## Out-of-Scope Targets
+
+All out-of-scope targets defined by huntr, as well as:
+
+- **llama-index-experimental**: This repository is for experimental code and is not
+  eligible for bug bounties; bug reports against it will be marked as interesting or a waste of
+  time and published with no bounty attached.
+- **llama-index-integrations/tools**: Community-contributed tools are not eligible for bug
+  bounties. Tools generally interact with the real world. Developers are expected to
+  understand the security implications of their code and are responsible for the security
+  of their tools.
+- Code documented with security notices. This will be decided on a case-by-case
+  basis, but will likely not be eligible for a bounty, as the code is already
+  documented with guidelines for developers that should be followed to make their
+  application secure.
+
+## Reporting LlamaCloud Vulnerabilities
+
+Please report security vulnerabilities associated with LlamaCloud by email to `[email protected]`.
+
+- LlamaCloud site: https://cloud.llamaindex.ai
+- LlamaCloud API: https://api.cloud.llamaindex.ai/docs
+- LlamaParse client: https://github.com/run-llama/llama_parse
+
+### Other Security Concerns
+
+For any other security concerns, please contact us at `[email protected]`.

docs/docs/CHANGELOG.md

+31

@@ -1,5 +1,36 @@
 # ChangeLog

+## [2024-04-25]
+
+### `llama-index-core` [0.10.32]
+
+- Corrected wrong output type for `OutputKeys.from_keys()` (#13086)
+- Add `run_jobs` to AWS base embedding (#13096)
+- Allow users to customize the keyword extractor prompt template (#13083)
+- (CondenseQuestionChatEngine) Do not condense the question if there's no conversation history (#13069)
+- QueryPlanTool: Execute tool calls in subsequent (dependent) nodes in the query plan (#13047)
+- Fix fusion retriever sometimes returning NoneType queries before similarity search (#13112)
+
+### `llama-index-embeddings-ipex-llm` [0.1.1]
+
+- Support llama-index-embeddings-ipex-llm for Intel GPUs (#13097)
+
+### `llama-index-packs-raft-dataset` [0.1.4]
+
+- Fix bug in RAFT dataset generator: multiple system prompts (#12751)
+
+### `llama-index-readers-microsoft-sharepoint` [0.2.1]
+
+- Add access-control-related metadata to the SharePoint reader (#13067)
+
+### `llama-index-vector-stores-pinecone` [0.1.6]
+
+- Nested metadata filter support (#13113)
+
+### `llama-index-vector-stores-qdrant` [0.2.8]
+
+- Nested metadata filter support (#13113)
+
 ## [2024-04-23]

 ### `llama-index-core` [0.10.31]
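Entry #13083 above concerns customizing the keyword extractor prompt. Here is a minimal sketch of how that might look, assuming the customization is exposed as a `prompt_template` argument on `KeywordExtractor`; the argument name and the template text are assumptions, and running the extractor requires an LLM configured via `Settings.llm` (e.g. an OpenAI API key).

```python
from llama_index.core.extractors import KeywordExtractor
from llama_index.core.schema import TextNode

# Assumed custom prompt; {context_str} and {keywords} mirror the variables
# used by the default keyword-extraction template.
CUSTOM_KEYWORD_PROMPT = (
    "Here is some text: {context_str}\n"
    "Give {keywords} domain-specific keywords, comma separated."
)

extractor = KeywordExtractor(
    keywords=5,                             # keywords per node
    prompt_template=CUSTOM_KEYWORD_PROMPT,  # assumed customization hook
)

# extract() issues an LLM call per node and returns one metadata dict per node.
metadata_list = extractor.extract(
    [TextNode(text="LlamaIndex connects LLMs to external data.")]
)
print(metadata_list[0].get("excerpt_keywords"))
```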

llama-index-core/llama_index/core/__init__.py

+1 -1

@@ -1,6 +1,6 @@
 """Init file of LlamaIndex."""

-__version__ = "0.10.31"
+__version__ = "0.10.32"

 import logging
 from logging import NullHandler
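Since the only change in `__init__.py` is the version string, the bump is visible at runtime through the module attribute. A quick check (generic Python; only the expected value comes from this commit):

```python
import llama_index.core

# After installing this release, the core package reports the bumped version.
print(llama_index.core.__version__)  # expected: "0.10.32"
```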

llama-index-core/pyproject.toml

+1 -1

@@ -43,7 +43,7 @@ name = "llama-index-core"
 packages = [{include = "llama_index"}]
 readme = "README.md"
 repository = "https://github.com/run-llama/llama_index"
-version = "0.10.31"
+version = "0.10.32"

 [tool.poetry.dependencies]
 SQLAlchemy = {extras = ["asyncio"], version = ">=1.4.49"}

llama-index-integrations/embeddings/llama-index-embeddings-azure-openai/pyproject.toml

+1 -1

@@ -27,7 +27,7 @@ exclude = ["**/BUILD"]
 license = "MIT"
 name = "llama-index-embeddings-azure-openai"
 readme = "README.md"
-version = "0.1.7"
+version = "0.1.8"

 [tool.poetry.dependencies]
 python = ">=3.8.1,<4.0"

llama-index-integrations/embeddings/llama-index-embeddings-openai/pyproject.toml

+1 -1

@@ -27,7 +27,7 @@ exclude = ["**/BUILD"]
 license = "MIT"
 name = "llama-index-embeddings-openai"
 readme = "README.md"
-version = "0.1.8"
+version = "0.1.9"

 [tool.poetry.dependencies]
 python = ">=3.8.1,<4.0"

poetry.lock

+16 -16

Some generated files are not rendered by default.

pyproject.toml

+2 -2

@@ -44,7 +44,7 @@ name = "llama-index"
 packages = [{from = "_llama-index", include = "llama_index"}]
 readme = "README.md"
 repository = "https://github.com/run-llama/llama_index"
-version = "0.10.31"
+version = "0.10.32"

 [tool.poetry.dependencies]
 python = ">=3.8.1,<4.0"
@@ -57,7 +57,7 @@ llama-index-agent-openai = ">=0.1.4,<0.3.0"
 llama-index-readers-file = "^0.1.4"
 llama-index-readers-llama-parse = "^0.1.2"
 llama-index-indices-managed-llama-cloud = "^0.1.2"
-llama-index-core = "^0.10.31"
+llama-index-core = "^0.10.32"
 llama-index-multi-modal-llms-openai = "^0.1.3"
 llama-index-cli = "^0.1.2"