Kontia/README.md
Marcelo Dares ea23136288 changes
2026-04-29 01:15:50 +02:00

This is a [Next.js](https://nextjs.org) project bootstrapped with [`create-next-app`](https://nextjs.org/docs/app/api-reference/cli/create-next-app).
## Getting Started
First, run the development server:
```bash
npm run dev
```
Open [http://localhost:3000](http://localhost:3000) with your browser to see the result.
You can start editing the page by modifying `app/page.tsx`. The page auto-updates as you edit the file.
This project uses [`next/font`](https://nextjs.org/docs/app/building-your-application/optimizing/fonts) to automatically optimize and load [Geist](https://vercel.com/font), a new font family for Vercel.
## Learn More
To learn more about Next.js, take a look at the following resources:
- [Next.js Documentation](https://nextjs.org/docs) - learn about Next.js features and API.
- [Learn Next.js](https://nextjs.org/learn) - an interactive Next.js tutorial.
You can check out [the Next.js GitHub repository](https://github.com/vercel/next.js) - your feedback and contributions are welcome!
## Deploy on Vercel
The easiest way to deploy your Next.js app is to use the [Vercel Platform](https://vercel.com/new?utm_medium=default-template&filter=next.js&utm_source=create-next-app&utm_campaign=create-next-app-readme) from the creators of Next.js.
Check out our [Next.js deployment documentation](https://nextjs.org/docs/app/building-your-application/deploying) for more details.
## OCR Setup (Recommended)
PDF analysis uses direct text extraction first. If text is insufficient (common in scanned PDFs), the API falls back to OCR with `ocrmypdf`.
Install host dependencies (Ubuntu/Debian):
```bash
sudo apt-get update
sudo apt-get install -y ocrmypdf poppler-utils tesseract-ocr tesseract-ocr-spa tesseract-ocr-eng
```
Verify:
```bash
ocrmypdf --version
```
If OCR is not available, the API returns a specific error (`OCR_UNAVAILABLE`) with install guidance.
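The extraction-then-OCR decision above can be sketched as a small helper. This is an illustrative sketch only: the 200-character threshold, function names, and error message are assumptions, not the project's actual code; only the `OCR_UNAVAILABLE` code comes from the docs.

```typescript
// Hypothetical heuristic: decide whether direct PDF text extraction produced
// enough content, or whether the ocrmypdf fallback should run.
// The 200-character threshold is an illustrative assumption.
const MIN_EXTRACTED_CHARS = 200;

function needsOcrFallback(extractedText: string): boolean {
  // Scanned PDFs typically yield an empty or whitespace-only text layer.
  const meaningful = extractedText.replace(/\s+/g, "");
  return meaningful.length < MIN_EXTRACTED_CHARS;
}

function ocrErrorResponse(): { code: string; message: string } {
  // Mirrors the documented OCR_UNAVAILABLE error code (message text assumed).
  return {
    code: "OCR_UNAVAILABLE",
    message: "Install ocrmypdf and tesseract-ocr to enable the OCR fallback.",
  };
}
```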
## AI Extraction for Acta Constitutiva
Onboarding now uses AI as the default extraction engine after PDF text analysis:
1. Extract direct text from PDF.
2. If text is insufficient, run OCR.
3. Send the extracted text to OpenAI to map fields and look up dictionary values.
4. If the AI call fails, fallback extraction runs so onboarding is not blocked.
Environment variables:
```bash
OPENAI_API_KEY=sk-...
OPENAI_ACTA_MODEL=gpt-4.1-mini
OPENAI_ACTA_TIMEOUT_MS=60000
OPENAI_ACTA_MAX_CHARS=45000
```
## AI Assist Modules (M1/M2/M3/M7/M10)
The platform now includes assist-only AI helpers for:
- `M1` Diagnostic suggestions per question (apply/dismiss).
- `M2` Strategic insights and suggested field values (explicit apply only).
- `M3` AI fit + blended score layered on deterministic recommendations.
- `M7` Compliance playbook recommendations (no auto state/severity updates).
- `M10` Auditor-style findings simulation from dossier + simulation data.
Deterministic scoring and compliance engines remain the source of truth.
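For M3, layering an AI fit score over a deterministic score could look like the sketch below. The 70/30 weighting and function name are purely illustrative assumptions; as stated above, the deterministic engine remains the source of truth and the blend is assist-only.

```typescript
// Hypothetical blend of a deterministic recommendation score with an AI fit
// score. Scores are assumed to be in [0, 1]; the default weight is invented
// for illustration.
function blendScore(deterministic: number, aiFit: number, aiWeight = 0.3): number {
  const clamped = Math.min(1, Math.max(0, aiFit)); // never trust raw AI output
  return deterministic * (1 - aiWeight) + clamped * aiWeight;
}
```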
Environment variables:
```bash
OPENAI_SMART_MODEL=gpt-4.1
OPENAI_SMART_FALLBACK_MODEL=gpt-4.1-mini
OPENAI_SMART_TIMEOUT_MS=75000
OPENAI_SMART_MAX_CHARS=55000
# Optional module overrides
OPENAI_M1_MODEL=gpt-4.1
OPENAI_M2_MODEL=gpt-4.1
OPENAI_M3_MODEL=gpt-4.1
OPENAI_M7_MODEL=gpt-4.1
OPENAI_M10_MODEL=gpt-4.1
```
Traceability:
- AI suggestions are persisted in `AiSuggestion` with request/response metadata.
- Use `POST /api/ai/suggestions/{id}/decision` with `{ "decision": "accept" | "dismiss" }` to persist user action.
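A request to the decision endpoint above can be built like this; the helper name is hypothetical, but the path and `{ "decision": ... }` body follow the route documented here.

```typescript
// Build the fetch arguments for persisting a user decision on an AiSuggestion.
function buildDecisionRequest(id: string, decision: "accept" | "dismiss") {
  return {
    url: `/api/ai/suggestions/${id}/decision`,
    init: {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ decision }),
    },
  };
}
```

Pass `url` and `init` to `fetch` against your deployment's base URL.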
## Mercado Pago Checkout (Plans M2-M10)
Checkout integration was added to sell plans:
- `Plan 1` -> Modules `2-4`
- `Plan 2` -> Modules `5-7`
- `Plan 3` -> Modules `8-10`
Required variables:
```bash
MP_ACCESS_TOKEN=TEST-... # test or production token
MP_API_BASE_URL=https://api.mercadopago.com
MP_PLAN_DURATION_DAYS=30
```
Flow:
- `GET /api/payments/checkout?plan=plan-1|plan-2|plan-3` creates a preference and redirects to Mercado Pago.
- `GET /api/payments/checkout/return` processes the callback and returns to `/dashboard`.
- `POST /api/payments/mercadopago/webhook` syncs approved payments.
DB:
- New Prisma models: `ModulePlanPurchase`, `ModulePlanSubscription`.
- Run migrations before using the checkout in a fresh environment.
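The plan-to-modules mapping above can be expressed as a small lookup; the plan ids match the checkout query parameter (`plan-1|plan-2|plan-3`), but this helper is an illustrative sketch, not the project's actual code.

```typescript
// Which modules each purchasable plan unlocks, per the mapping documented above.
const PLAN_MODULES: Record<string, number[]> = {
  "plan-1": [2, 3, 4],
  "plan-2": [5, 6, 7],
  "plan-3": [8, 9, 10],
};

function modulesForPlan(plan: string): number[] {
  const modules = PLAN_MODULES[plan];
  if (!modules) throw new Error(`unknown plan: ${plan}`);
  return modules;
}
```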
## Local CLI Script (PDF -> OCR/text -> AI)
Run:
```bash
npm run acta:analyze:ai -- ./path/to/acta.pdf
```
Optional output file:
```bash
npm run acta:analyze:ai -- ./path/to/acta.pdf --out ./result.json
```
## Licita Ya API Key Test
Add these vars to `.env`:
```bash
LICITAYA_API_KEY=your-licitaya-api-key
LICITAYA_BASE_URL=https://<licitaya-base-url>
LICITAYA_TEST_ENDPOINT=/tender/SCRZJ
LICITAYA_ACCEPT=application/json
LICITAYA_TIMEOUT_MS=20000
LICITAYA_ALLOW_EMPTY_SEARCH=true
LICITAYA_ENABLED=true
LICITAYA_ITEMS_PER_PAGE=50
LICITAYA_MAX_PAGES_PER_RUN=3
LICITAYA_SYNC_INTERVAL_HOURS=12
LICITAYA_DATE_OFFSET_DAYS=1
LICITAYA_DATE_FALLBACK_WINDOW_DAYS=3
LICITAYA_FALLBACK_TENDER_IDS=SCRZJ
```
Run the connection test:
```bash
npm run licitaya:test
```
Override values on demand:
```bash
npm run licitaya:test -- --base-url https://www.licitaya.com.mx/api/v1 --endpoint '/tender/search?items=10&page=1' --accept application/json
```
You can also pass a full URL in `--endpoint`:
```bash
npm run licitaya:test -- --endpoint https://<licitaya-base-url>/<country-endpoint>
```
Common Licita Ya lookups:
```bash
# Search tenders (keyword + filters)
npm run licitaya:test -- --endpoint '/tender/search?keyword=computadora,monitor&state=NLE,XX&items=10&page=1&order=1'
# Search by date (YYYYmmdd)
npm run licitaya:test -- --endpoint '/tender/search?date=20260313&items=10&page=1'
# Get one tender by ID
npm run licitaya:test -- --endpoint '/tender/SCRZJ'
```
Country base URL (pick one only):
- Mexico: `https://www.licitaya.com.mx/api/v1`
- Argentina: `https://www.licitaya.com.ar/api/v1`
Notes:
- The script sends your key in header `X-API-KEY`.
- It prints status code + response preview.
- For `/tender/search`, you can allow an empty 404 as connectivity success with `LICITAYA_ALLOW_EMPTY_SEARCH=true` or `--allow-empty-search`.
- A non-2xx response exits with code `1` unless empty-search mode is allowed.
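The success/exit-code rules in the notes above can be sketched as follows. The header name (`X-API-KEY`) and the empty-search 404 behavior come from the notes; the function names and the exact matching logic are assumptions.

```typescript
// Headers the test script is documented to send (Accept value configurable).
function buildHeaders(apiKey: string, accept = "application/json") {
  return { "X-API-KEY": apiKey, Accept: accept };
}

// Treat a 404 on /tender/search as connectivity success when empty-search
// mode is allowed (LICITAYA_ALLOW_EMPTY_SEARCH=true or --allow-empty-search);
// any other non-2xx response counts as failure (exit code 1).
function isConnectivitySuccess(
  status: number,
  endpoint: string,
  allowEmptySearch: boolean,
): boolean {
  if (status >= 200 && status < 300) return true;
  return allowEmptySearch && status === 404 && endpoint.includes("/tender/search");
}
```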
## Compliance Cron + Persistence (M7)
This project now persists official-regulations verification state and suggestions in Postgres.
Apply migration and regenerate Prisma client:
```bash
npm run prisma:migrate
npm run prisma:generate
```
Scheduler endpoints:
- `GET/POST /api/cron/licitations-sync` (includes periodic regulations verification by default)
- `GET/POST /api/cron/regulations-verify` (regulations-only run)
Accepted auth for cron routes:
- Header `x-sync-token: $LICITATIONS_SYNC_TOKEN`
- Header `Authorization: Bearer $CRON_SECRET` (Vercel cron compatible)
- Query param `?token=$LICITATIONS_SYNC_TOKEN` (fallback)
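The three accepted auth methods above can be checked in order with a guard like this minimal sketch (the function name is illustrative, not the actual route code):

```typescript
// Accepts any of: x-sync-token header, Authorization: Bearer CRON_SECRET,
// or a ?token= query-param fallback. Header keys are assumed lowercased,
// as Node/Next.js normalizes incoming header names.
function isCronAuthorized(
  headers: Record<string, string>,
  query: Record<string, string>,
  env: { LICITATIONS_SYNC_TOKEN?: string; CRON_SECRET?: string },
): boolean {
  if (env.LICITATIONS_SYNC_TOKEN &&
      headers["x-sync-token"] === env.LICITATIONS_SYNC_TOKEN) return true;
  if (env.CRON_SECRET &&
      headers["authorization"] === `Bearer ${env.CRON_SECRET}`) return true;
  if (env.LICITATIONS_SYNC_TOKEN &&
      query["token"] === env.LICITATIONS_SYNC_TOKEN) return true;
  return false;
}
```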
Vercel schedules are included in [`vercel.json`](./vercel.json).