refactor: improve README in all packages (#230)
sagojez authored Jan 28, 2025
1 parent f7dbbe6 commit 066a93e
Showing 6 changed files with 20 additions and 73 deletions.
6 changes: 6 additions & 0 deletions api/README.md
@@ -21,14 +21,20 @@ Create a .env file in the root of the project with the following environment variables:
```bash
RUST_LOG=info
ENVIRONMENT=development

EVENT_DATABASE_URL=mongodb://localhost:27017/?directConnection=true
CONTROL_DATABASE_URL=mongodb://localhost:27017/?directConnection=true
CONTEXT_DATABASE_URL=mongodb://localhost:27017/?directConnection=true
UDM_DATABASE_URL=mongodb://localhost:27017/?directConnection=true

EVENT_DATABASE_NAME=events-service
CONTEXT_DATABASE_NAME=events-service
CONTROL_DATABASE_NAME=events-service
UDM_DATABASE_NAME=events-service

IOS_CRYPTO_SECRET=jbu7WKZiBUa648OAyPAb8h8FJoX03Ihz
SECRETS_SERVICE_PROVIDER=ios-kms
```

Then run the following command:
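
(The command itself is collapsed in this view. By analogy with the database README in this same commit, it is likely the usual watch-and-run invocation — an assumption, not something this diff shows:)

```bash
cargo watch -x run -q | bunyan
```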
5 changes: 0 additions & 5 deletions archiver/README.md
@@ -18,11 +18,6 @@ This command will monitor changes in the project and execute the archiver service…

## Running the Tests

To run the tests for the archiver, use:

```bash
cargo nextest run --all-features
```

This will execute all tests in the project, ensuring that the archiving process works as expected across all features.

29 changes: 10 additions & 19 deletions database/README.md
@@ -16,7 +16,7 @@ $ cargo watch -x run -q | bunyan

By default, the service runs on port **5005**, but this can be configured through environment variables.
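
For example, to move the service to another port, override the `INTERNAL_SERVER_ADDRESS` variable shown in the configuration below before starting the service (the port number here is just an example):

```bash
INTERNAL_SERVER_ADDRESS=0.0.0.0:6005 cargo watch -x run -q | bunyan
```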

## Integrating a New Database
## Integrating a new database

To add support for a new database, follow these steps:

@@ -33,32 +33,29 @@ pub enum DatabaseConnectionType {
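
The body of the enum is collapsed in this view; a minimal sketch of what step 1 might look like, where the `MySql` variant and the derive list are hypothetical:

```rust
#[derive(Debug, Clone, Serialize, Deserialize, PartialEq)]
pub enum DatabaseConnectionType {
    PostgreSql,
    // Hypothetical variant added for the new database.
    MySql,
}
```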
2. **Create the necessary configuration and add it to the configuration loader:**

```rust
#[derive(Envconfig, Clone, Serialize, Deserialize, PartialEq)]
pub struct DatabaseConnectionConfig {
pub struct DatabasePodConfig {
#[envconfig(from = "WORKER_THREADS")]
pub worker_threads: Option<usize>,
#[envconfig(from = "INTERNAL_SERVER_ADDRESS", default = "0.0.0.0:5005")]
pub address: SocketAddr,
#[envconfig(from = "ENVIRONMENT", default = "development")]
pub environment: Environment,
#[envconfig(nested = true)]
pub postgres_config: PostgresConfig,
#[envconfig(from = "DATABASE_CONNECTION_TYPE", default = "postgres")]
#[envconfig(from = "CONNECTIONS_URL", default = "http://localhost:3005")]
pub connections_url: String,
#[envconfig(from = "DATABASE_CONNECTION_TYPE", default = "postgresql")]
pub database_connection_type: DatabaseConnectionType,
#[envconfig(from = "CONNECTION_ID")]
pub connection_id: String
pub connection_id: String,
#[envconfig(from = "JWT_SECRET")]
pub jwt_secret: Option<String>,
}
```
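
A minimal sketch of loading this configuration at startup, assuming the `envconfig` crate's derived `init_from_env` constructor (the `main` function here is illustrative only):

```rust
use envconfig::Envconfig;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Each field is read from its environment variable, falling back to its
    // declared default (e.g. INTERNAL_SERVER_ADDRESS -> 0.0.0.0:5005).
    let config = DatabasePodConfig::init_from_env()?;
    println!("database pod listening on {}", config.address);
    Ok(())
}
```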

3. **Implement the `Storage` trait:**

```rust
#[async_trait]
pub trait Storage: Send + Sync {
async fn execute_raw(
&self,
query: &str,
) -> Result<Vec<HashMap<String, Value>>, PicaError>;
async fn execute_raw(&self, query: &str) -> Result<Vec<HashMap<String, Value>>, PicaError>;

async fn probe(&self) -> Result<bool, PicaError>;
}
```

@@ -69,9 +66,8 @@ Be mindful that implementing this trait usually requires creating serializers for…
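
A hypothetical implementation sketch, using an in-memory backend invented purely for illustration (`Storage` and `PicaError` are the crate's own types):

```rust
use std::collections::HashMap;
use serde_json::Value;

// Illustrative only; real implementations wrap an actual database client.
pub struct InMemoryStorage {
    rows: Vec<HashMap<String, Value>>,
}

impl Storage for InMemoryStorage {
    async fn execute_raw(&self, _query: &str) -> Result<Vec<HashMap<String, Value>>, PicaError> {
        // A real backend would parse and execute the query; this sketch
        // simply echoes the rows it holds in memory.
        Ok(self.rows.clone())
    }

    async fn probe(&self) -> Result<bool, PicaError> {
        // Report the backend as reachable.
        Ok(true)
    }
}
```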
4. **Implement the `Initializer` trait:**

```rust
#[async_trait]
pub trait Initializer {
async fn init(config: &DatabaseConnectionConfig) -> Result<Server, anyhow::Error>;
async fn init(config: &DatabasePodConfig) -> Result<Server, anyhow::Error>;
}
```
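
A matching hypothetical `Initializer` for the sketched backend above (`MySqlInitializer` and its body are invented for illustration):

```rust
pub struct MySqlInitializer;

impl Initializer for MySqlInitializer {
    async fn init(config: &DatabasePodConfig) -> Result<Server, anyhow::Error> {
        // A real implementation would connect to the backend selected by
        // `config.database_connection_type` and wrap it in the crate's `Server`.
        let _ = config;
        todo!("construct and return the Server")
    }
}
```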

@@ -80,11 +76,6 @@ …tests to verify the functionality of the new storage type.

## Running the Tests

To run the test suite for the storage service, execute:

```bash
cargo nextest run --all-features
```

This command will run all tests associated with the storage functionality, ensuring correct behavior across various scenarios.

44 changes: 0 additions & 44 deletions entities/README.md
@@ -7,47 +7,3 @@ For a more detailed explanation, please refer to the code itself.

Pica domain holds the common data structures used in the [pica](https://github.com/picahq/pica) repository. Along with these data structures, it also
provides utilities to create `id`s and manipulate `json`, as well as general-purpose services.

### Environment Variables

The following environment variables are introduced by this project, though not all of them are necessarily used:

- `REDIS_URL`: The URL to connect to the Redis server. Default is `redis://localhost:6379`.
- `REDIS_QUEUE_NAME`: The name of the queue to be used in the Redis server. Default is `events`.
- `REDIS_EVENT_THROUGHPUT_KEY`: The key to be used to store the event throughput in the Redis server. Default is `event_throughput`.
- `REDIS_API_THROUGHPUT_KEY`: The key to be used to store the API throughput in the Redis server. Default is `api_throughput`.

- `CONTROL_DATABASE_URL`: The URL to connect to the control database. Default is `mongodb://localhost:27017`.
- `CONTROL_DATABASE_NAME`: The name of the control database. Default is `database`.
- `UDM_DATABASE_URL`: The URL to connect to the UDM database. Default is `mongodb://localhost:27017`.
- `UDM_DATABASE_NAME`: The name of the UDM database. Default is `udm`.
- `EVENT_DATABASE_URL`: The URL to connect to the event database. Default is `mongodb://localhost:27017`.
- `EVENT_DATABASE_NAME`: The name of the event database. Default is `database`.
- `CONTEXT_DATABASE_URL`: The URL to connect to the context database. Default is `mongodb://localhost:27017`.
- `CONTEXT_DATABASE_NAME`: The name of the context database. Default is `database`.
- `CONTEXT_COLLECTION_NAME`: The name of the context collection.

- `ENVIRONMENT`: The environment in which the application is running. Default is `development`.

- `OPENAI_API_KEY`: The API key to connect to the OpenAI server

- `SECRETS_SERVICE_BASE_URL`: The base URL to connect to the secrets service. Default is `https://secrets-service-development-b2nnzrt2eq-uk.a.run.app/`.
- `SECRETS_SERVICE_GET_PATH`: The path to get secrets in the secrets service. Default is `v1/secrets/get/`.
- `SECRETS_SERVICE_CREATE_PATH`: The path to create secrets in the secrets service. Default is `v1/secrets/create/`.

- `WATCHDOG_EVENT_TIMEOUT`: The event timeout to be used in the watchdog service. Default is `300`.
- `WATCHDOG_POLL_DURATION`: The poll duration to be used in the watchdog service. Default is `10`.

### Services

- Caller Client: A client used by the `pica` repository to make requests to external APIs and return their responses.
- Secrets Client: A client used by the `pica` repository to get and create secrets in the secrets service.
- Watchdog Client: A client used by the `pica` repository to start and stop the watchdog service.

### Data Structures

Please refer to the code itself for a detailed explanation of the data structures.

### Utilities

- Hash Data: A utility used by the `pica` repository to hash data and return the resulting hash.
6 changes: 1 addition & 5 deletions unified/README.md
@@ -6,14 +6,10 @@ The core logic for unification APIs and user request handling in the Pica project…

Pica Unified provides the core functionality for managing and processing unification APIs, ensuring proper handling of user requests across the Pica ecosystem. While this service is not directly runnable, its logic forms the backbone of the unification process within the system.

For detailed usage and API references, visit the [API documentation](https://docs.picaos.com).
For detailed usage and API references, visit the [API documentation](https://docs.picaos.com/api-reference/introduction).

## Running the Tests

To ensure the correctness of the unification logic, run the following test suite:

```bash
cargo nextest run --all-features
```

This command will execute all tests related to the unification APIs and user request handling, ensuring that the system behaves as expected under various scenarios.
3 changes: 3 additions & 0 deletions watchdog/README.md
@@ -0,0 +1,3 @@
# Pica Watchdog

Keeps the rate limiter working by periodically cleaning the Redis keys related to API and event throughput.
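
A minimal sketch of such a cleanup loop, assuming the `redis` and `tokio` crates and the default throughput key names (`event_throughput`, `api_throughput`) — illustrative only, not the crate's actual code:

```rust
use std::time::Duration;

#[tokio::main]
async fn main() -> redis::RedisResult<()> {
    let client = redis::Client::open("redis://localhost:6379")?;
    let mut conn = client.get_multiplexed_async_connection().await?;
    // Poll on a fixed cadence (cf. WATCHDOG_POLL_DURATION, default 10 seconds).
    let mut ticker = tokio::time::interval(Duration::from_secs(10));

    loop {
        ticker.tick().await;
        // Deleting the throughput counters resets the rate-limiter windows.
        let _: i64 = redis::cmd("DEL")
            .arg(&["event_throughput", "api_throughput"])
            .query_async(&mut conn)
            .await?;
    }
}
```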
