**1. ✨ GoodSoftwareTechnicalAssessment ✨**

🔗 View Repo ↗ · 📄 View Documentation ↗

🔹 **Project architecture** (E:\GoodSoftwareTechnicalAssessment\VSCODE\typescript-file-handler): a lightweight TypeScript/Node CLI centered on the single-responsibility FileHandler class. It encapsulates synchronous fs read/write/update/delete helpers plus an HTML generation helper that produces index.html and feeds directly into the runner (app.ts/app.js). Everything compiles via tsconfig.json, loads scripts through script.js, and ships tests that exercise each workflow.

| Layer | Responsibility |
| --- | --- |
| 🧱 Core Domain | FileHandler ensures deterministic storage I/O and HTML scaffolding (SRP). |
| 🚀 Application | app.ts/app.js wires the handler to CLI output while keeping a config-driven entry point. |
| 🧪 Verification | fileHandler.test.ts/.js plus the built dist assets guard synchronous behaviors. |

| Pattern | Applied? | Evidence / Action |
| --- | --- | --- |
| ✅ SOLID | Partial | FileHandler honors SRP and encapsulates I/O, while dependencies are injected via constructor parameters (OCP remains to be expanded). |
| 🏷 Hexagonal architecture | No | Boundary adapters are missing; to apply, introduce ports/interfaces for storage and HTML generation plus an ApplicationService facade. |
| ⚡ Event-driven | No | Convert CLI operations into events (e.g., FileWritten) and subscribe listeners for logging/HTML emitters. |
| 🤖 AI | No | Integrate a lightweight inference call (e.g., via OpenAI) during HTML generation to surface insights. |
| 🧠 Machine Learning | No | Introduce a cached model wrapper that suggests content for index.html before writing. |

🚀 **How to apply the missing patterns:** introduce hexagonal ports/adapters around FileHandler, fire events from each CRUD operation onto a bus (event-driven), and call external AI/ML helpers during HTML assembly to enrich output; wrap these augmentations behind feature flags so the CLI stays deterministic.
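
The ports-and-flags recommendation can be sketched quickly; the snippet below is illustrative Python (the repo itself is TypeScript), and names like StoragePort and InMemoryStorageAdapter are hypothetical, not code from the repo:

```python
import os
from typing import Protocol


class StoragePort(Protocol):
    """Port: what the application needs from storage, not how it is done."""
    def read(self, path: str) -> str: ...
    def write(self, path: str, content: str) -> None: ...


class InMemoryStorageAdapter:
    """Adapter: a deterministic stand-in for the synchronous fs helpers."""
    def __init__(self) -> None:
        self.files: dict[str, str] = {}

    def read(self, path: str) -> str:
        return self.files[path]

    def write(self, path: str, content: str) -> None:
        self.files[path] = content


def generate_index_html(storage: StoragePort, title: str) -> str:
    """Core use case depends only on the port; AI enrichment is feature-flagged."""
    body = f"<h1>{title}</h1>"
    if os.environ.get("ENABLE_AI_ENRICHMENT") == "1":  # hypothetical flag name
        body += "<!-- AI-generated insights would be appended here -->"
    html = f"<html><body>{body}</body></html>"
    storage.write("index.html", html)
    return html
```

With the flag off, output is byte-for-byte reproducible; tests can inject the in-memory adapter instead of touching the real filesystem.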

**Good practices:** consistent Unicode/style sanitization, descriptive logging, isolating CLI logic from HTML DOM generation, type-safe tests before publishing dist, and configs like jest.config.js kept paired with package.json metadata.

**2. 🩺 AI_MEDICAL_IMAGING_PROTOTYPE 🩻**

Language: C++ / Python

🔗 View Repo ↗ · 📄 View Documentation ↗

🔹 **Project architecture:** The repository pairs a native C++ console pipeline (under E:\AI_MEDICAL_IMAGING_PROTOTYPE\CPP\ConsoleApplicationAIMedicalImagingPrototype) with a Python inference stage. The C++ stage performs histogram equalization and emits histogram_equalization.png; the Python module (AIModel.py) then loads that preprocessed image, normalizes it for a pre-trained ResNet18, computes Grad-CAM explanations, and renders the boosted overlay via Matplotlib. Both stages log deterministic signals so the CLI remains reproducible while showcasing explainable AI.

| Layer | Responsibility |
| --- | --- |
| 🧱 C++ Preprocessor | ConsoleApplicationAIMedicalImagingPrototype reads inputs, applies histogram equalization, and saves deterministic artifacts for the Python stage. |
| 🧠 Python Inference | AIModel.py loads the image, normalizes tensors, runs ResNet18, and overlays Grad-CAM heatmaps for explainability. |
| 📊 Visualization | Matplotlib displays the resulting overlay, while logs inform future automation or UI layering. |

| Pattern | Applied? | Evidence / Action |
| --- | --- | --- |
| ✅ SOLID | Partial | AIModel encapsulates preprocessing, inference, and visualization; injecting ResNet + GradCAM via parameters would close OCP/ISP gaps. |
| 🏷 Hexagonal architecture | No | Wrap the C++ output and Python inference inside ports/adapters so the CLI can swap storage, models, or visualization independently. |
| ⚡ Event-driven | No | Publish events after preprocessing, inference, and explainability so downstream callers can listen without direct coupling. |
| 🤖 AI | Yes | Pretrained ResNet18 + Grad-CAM constitute the inference engine orchestrated within AIModel.py. |
| 🧠 Machine Learning | Yes | Torch transforms + ResNet demonstrate ML-ready normalization, batching, and explainability. |

🚀 **How to expand missing patterns:** define clear ports for preprocessing versus inference, emit domain events per stage so other services can react, and inject the AI/ML dependencies through configuration to keep the CLI deterministic alongside experimentation.
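
A minimal sketch of the stage ports and per-stage events suggested above, in plain Python with stub stages standing in for the real C++ equalizer and ResNet18 (all names are hypothetical):

```python
from typing import Callable, Protocol


class PreprocessorPort(Protocol):
    def equalize(self, pixels: list[int]) -> list[int]: ...


class InferencePort(Protocol):
    def predict(self, pixels: list[int]) -> str: ...


class StubEqualizer:
    """Stand-in for the C++ histogram-equalization stage."""
    def equalize(self, pixels: list[int]) -> list[int]:
        lo, hi = min(pixels), max(pixels)
        span = max(hi - lo, 1)
        return [round((p - lo) * 255 / span) for p in pixels]


class StubClassifier:
    """Stand-in for the ResNet18 + Grad-CAM stage."""
    def predict(self, pixels: list[int]) -> str:
        return "bright" if sum(pixels) / len(pixels) > 127 else "dark"


def run_pipeline(pre: PreprocessorPort, infer: InferencePort,
                 pixels: list[int],
                 on_stage: Callable[[str], None] = lambda e: None) -> str:
    """Orchestrator: each stage is injected, and an event hook fires per stage."""
    equalized = pre.equalize(pixels)
    on_stage("Preprocessed")
    label = infer.predict(equalized)
    on_stage("Inferred")
    return label
```

Swapping StubClassifier for a real Torch adapter requires no change to the orchestrator, which is the point of the ports.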

**3. 🩺 ElectronicHealthRecordTechnicalAssessmentMCG 📋**

🔗 View Repo ↗ · 📄 View Documentation ↗

🔹 **Project architecture:** The deliverable couples an Access/SQL-backed data store
(E:\ElectronicHealthRecordTechnicalAssessmentMCG\ACCDB\EHR_SYSTEM.accdb plus SQL creation scripts/docs)
with a Create React App front-end (E:\ElectronicHealthRecordTechnicalAssessmentMCG\VSCODE\EHRFrontend\ehr-frontend-app)
that consumes REST patterns defined under HTML\Design and HTML\Implementation. The design notes map Hexagonal layers to Backend folders, and the instructions emphasize API operations, audit logging, and deployment-ready scripts for the EHR workflow.

| Layer | Responsibility |
| --- | --- |
| 🗄 Persistence | The Access DB and SQL scripts define normalized tables, stored procedures, and audit logs to keep clinical data consistent. |
| 🔌 Domain/Ports | Design artifacts capture Hexagonal ports (EHR API Operations, Backend folder structure) that decouple infrastructure from business rules. |
| 🌐 Frontend | The Create React App UI (npm scripts, lint/test/build) visualizes workflow states and consumes documented REST endpoints. |

| Pattern | Applied? | Evidence / Action |
| --- | --- | --- |
| ✅ SOLID | Partial | The backend flow separates persistence, services, and controllers (DB scripts, AuditLog services) but needs stricter dependency injection across folders. |
| 🏷 Hexagonal architecture | Yes | The HTML/Design documentation explicitly maps adapters, ports, and backend folders to Hexagonal layers for clarity. |
| ⚡ Event-driven | No | Audit logging occurs via sequential services; to adopt events, publish domain events from AuditLogService to listeners. |
| 🤖 AI | No | Introduce ML-assisted triage or predictive analytics alongside the React UI if needed. |
| 🧠 Machine Learning | No | Add analytics pipelines that feed Access data into a TensorFlow/PyTorch module to surface insights. |

🚀 **How to apply missing patterns:** emit events from AuditLogService for each write, and wrap future AI/ML helpers behind Hexagonal ports so the React UI stays decoupled from experimentation.
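
The AuditLogService event suggestion could look like this; a hypothetical Python sketch of the observer wiring, not code from the deliverable:

```python
from dataclasses import dataclass, field
from typing import Callable


@dataclass(frozen=True)
class AuditEntryRecorded:
    """Domain event published after every audit write."""
    entity: str
    action: str


@dataclass
class AuditLogService:
    """Stores entries and notifies subscribers after each write."""
    entries: list[AuditEntryRecorded] = field(default_factory=list)
    subscribers: list[Callable[[AuditEntryRecorded], None]] = field(default_factory=list)

    def subscribe(self, handler: Callable[[AuditEntryRecorded], None]) -> None:
        self.subscribers.append(handler)

    def record(self, entity: str, action: str) -> AuditEntryRecorded:
        event = AuditEntryRecorded(entity, action)
        self.entries.append(event)
        for handler in self.subscribers:  # listeners stay decoupled from storage
            handler(event)
        return event
```

A notification service or analytics sink subscribes without the audit log ever importing it, which is the decoupling the table calls for.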

**4. 🧪 PRUEBA_TECNICA_TRIARIO 🔌**

Language: Python

🔗 View Repo ↗ · 📄 View Documentation ↗

🔹 **Project architecture:** Versions V001/V002 describe a HubSpot integration pipeline.
Python orchestrates HubSpot API clients (hubspot_project/hubspot) plus data helpers
(core) that read payload files and generate contact/deal records for CRM automation.
Configuration lives under config/settings.py, while CLI scripts (e.g., main.py)
load credentials and orchestrate generators/tests. The deliverables include architectural notes
and PDF explanations for CRM integration and HubSpot compliance.

| Layer | Responsibility |
| --- | --- |
| 🧱 Core Logic | Generators, file_operations, and data_structures encapsulate HubSpot data modeling and exports. |
| 🧭 API Orchestration | hubspot/* modules centralize contacts/deals/auth plus error handling for CRM payloads. |
| ⚙️ Configuration | config/settings.py injects secrets, targets, and HubSpot synchronization parameters for V002. |

| Pattern | Applied? | Evidence / Action |
| --- | --- | --- |
| ✅ SOLID | Partial | Modules isolate responsibilities, but dependency injection via constructors/functions would complete OCP and ISP coverage. |
| 🏷 Hexagonal architecture | No | Wrap the HubSpot clients plus data generators behind ports so you can swap HTTP libraries or data sinks without touching core logic. |
| ⚡ Event-driven | No | Standardize on event emission (e.g., ContactCreated) for orchestration instead of sequential API calls. |
| 🤖 AI | No | Add AI-assisted data validation or CRM tagging before API deliveries. |
| 🧠 Machine Learning | No | Introduce ML models for lead scoring based on imported CRM metrics and integrate them with the core pipelines. |

🚀 **How to expand the missing patterns:** build an adapter layer around the HubSpot clients, emit domain events when contacts/deals update, and wrap future AI/ML helpers behind the same ports so the tests stay stable.
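
A hedged Python sketch of the adapter-plus-events idea; ContactsPort, FakeHubSpotAdapter, and import_contacts are illustrative names, not modules from the repo:

```python
from dataclasses import dataclass
from typing import Protocol


@dataclass(frozen=True)
class ContactCreated:
    """Domain event emitted once per successfully created contact."""
    email: str
    contact_id: str


class ContactsPort(Protocol):
    """Port the core logic depends on; the real HubSpot HTTP client plugs in here."""
    def create_contact(self, email: str) -> str: ...


class FakeHubSpotAdapter:
    """Test double standing in for the real hubspot/contacts module."""
    def __init__(self) -> None:
        self._next_id = 0

    def create_contact(self, email: str) -> str:
        self._next_id += 1
        return f"contact-{self._next_id}"


def import_contacts(port: ContactsPort, emails: list[str]) -> list[ContactCreated]:
    """Core pipeline: creates contacts and emits an event per creation."""
    return [ContactCreated(e, port.create_contact(e)) for e in emails]
```

Because the pipeline only sees the port, swapping the HTTP library (or recording events to a queue) never touches core logic.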

**5. 💼 PruebaTecnicaAmarisConsulting20Julio2024 🧭**

🔗 View Repo ↗ · 📄 View Documentation ↗

🔹 **Project architecture:** The FastAPI + DynamoDB backend lives inside E:\PruebaTecnicaAmarisConsulting20Julio2024\CODIGO\PlataformaFondosFPV\backend\plataforma-fondo-fpv-backend. main.py wires routes, models.py declares Pydantic/Dynamo models, and serverless.yml deploys the service to AWS, with Dynamo tables defined through the .serverless templates plus BAT scripts for Athena/Dynamo creation. Node/npm tooling manages dependencies (`package.json`/`package-lock.json`), while PDFs and architectural notes explain the rationale for the funds platform and infrastructure choices.

| Layer | Responsibility |
| --- | --- |
| ☁️ API Surface | FastAPI endpoints in main.py orchestrate requests, input validation, and scheduling. |
| 🧱 Domain Models | models.py captures DynamoDB schemas/DTOs, while serverless.yml defines resources per service. |
| ⚙️ Infra | Deployment runs via Serverless CloudFormation artifacts; BAT scripts provision Athena/Dynamo tables for analytics. |

| Pattern | Applied? | Evidence / Action |
| --- | --- | --- |
| ✅ SOLID | Partial | FastAPI handlers vs. models separate concerns, but service layers could further isolate validation and persistence. |
| 🏷 Hexagonal architecture | No | Introduce ports for persistence (DynamoDB/Athena) and implement adapters so the API logic stays decoupled from infra. |
| ⚡ Event-driven | No | Publish events (e.g., TransactionRecorded) to SNS/SQS to trigger further processing without blocking API responses. |
| 🤖 AI | No | Add ML-infused prioritization or recommendation features that consult Dynamo data during request processing. |
| 🧠 Machine Learning | No | Use historical Dynamo data to train scoring models and expose features via new endpoints. |

🚀 **How to expand missing patterns:** wrap persistence/analytics in Hexagonal ports, emit events after writes to decouple workflows, and layer AI/ML helpers behind the same ports so experiments stay testable.
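
The write-then-publish flow might be prototyped like this, with an in-process queue standing in for SNS/SQS (all names are hypothetical):

```python
import queue
from dataclasses import dataclass


@dataclass(frozen=True)
class TransactionRecorded:
    """Event published after each fund transaction is persisted."""
    fund_id: str
    amount: int


class LocalBus:
    """In-process stand-in for SNS/SQS so the flow stays testable offline."""
    def __init__(self) -> None:
        self._q: "queue.Queue[TransactionRecorded]" = queue.Queue()

    def publish(self, event: TransactionRecorded) -> None:
        self._q.put(event)

    def drain(self) -> list[TransactionRecorded]:
        out = []
        while not self._q.empty():
            out.append(self._q.get())
        return out


def record_transaction(table: dict, bus: LocalBus, fund_id: str, amount: int) -> None:
    """Write first, then publish; subscribers react without blocking the API."""
    table[fund_id] = table.get(fund_id, 0) + amount
    bus.publish(TransactionRecorded(fund_id, amount))
```

In production the LocalBus would be replaced by an SNS/SQS adapter behind the same publish signature.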

**6. ⚙️ PruebaTecnicaKLaganSpringBootAngularJunio18_2025 🧱**

Language: Java

🔗 View Repo ↗ · 📄 View Documentation ↗

🔹 **Project architecture:** The Spring Boot hexagonal backend sits in
E:\PruebaTecnicaKLaganSpringBootAngularJunio18_2025\JAVA\PruebaTecnicaKLaganManuelaCortesGranados and uses Gradle tooling
plus generated class artifacts. Hexagonal documentation files (like HTML\02_Distribucion Paquetes Arquitectura Hexagonal.html)
describe adapters, ports, DTOs, and domain models. The HTML\index.html captures the Angular flow hitting the hexagonal REST
controllers (WarehouseController, DTOs, WarehouseService) and highlights JWT security via JwtAuthFilter and
WebSecurityConfig.

| Layer | Responsibility |
| --- | --- |
| 🌐 Web Adapter | WarehouseController, DTOs, and mapper classes expose REST contracts aligned with the Angular UI. |
| ⚙️ Domain | Domain models, use cases, and service layers govern business rules for warehouses/shelves. |
| 💾 Persistence + Security | JPA adapters, repositories, DataInitializer, JwtAuthFilter, and WebSecurityConfig secure storage/auth. |

| Pattern | Applied? | Evidence / Action |
| --- | --- | --- |
| ✅ SOLID | Partial | Hexagonal layering separates concerns; extending DI/inversion of control between adapters and domain would complete OCP/ISP. |
| 🏷 Hexagonal architecture | Yes | The documented package distribution maps interfaces to adapters, matching the diagrams in the HTML deliverables. |
| ⚡ Event-driven | No | Publish domain events for the warehouse/shelf lifecycle so downstream adapters (e.g., analytics) can decouple handling. |
| 🤖 AI | No | Introduce a predictive engine that forecasts stock needs, injected via a port so experiments stay optional. |
| 🧠 Machine Learning | No | Train models from warehouse data and surface scoring through a dedicated port for the Angular UI. |

🚀 **How to expand missing patterns:** emit domain events, wrap AI/ML helpers behind ports, and keep the Angular client decoupled so the core hexagonal services remain testable.
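
One way to keep the predictive engine optional is a null-object default behind the port; a Python illustration (the project itself is Java, and ForecastPort/WarehouseService here are invented names):

```python
from typing import Protocol


class ForecastPort(Protocol):
    """Port for the optional predictive engine suggested above."""
    def forecast_demand(self, sku: str) -> int: ...


class NoForecast:
    """Null object: keeps the core service deterministic when AI is disabled."""
    def forecast_demand(self, sku: str) -> int:
        return 0


class AverageForecast:
    """Trivial stand-in for a real model: averages historic daily demand."""
    def __init__(self, history: dict[str, list[int]]) -> None:
        self.history = history

    def forecast_demand(self, sku: str) -> int:
        samples = self.history.get(sku, [])
        return round(sum(samples) / len(samples)) if samples else 0


class WarehouseService:
    def __init__(self, forecaster: ForecastPort = NoForecast()) -> None:
        self.forecaster = forecaster  # injected via the port; experiments stay optional

    def reorder_quantity(self, sku: str, on_hand: int) -> int:
        return max(self.forecaster.forecast_demand(sku) - on_hand, 0)
```

The same shape translates directly to a Java interface with a no-op default implementation wired through Spring DI.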

**7. 📊 PruebaTecnicaMCGNubiralPythonReactOpenAIArgentinaTradeSEP302025 🤖**

Language: Python

🔗 View Repo ↗ · 📄 View Documentation ↗

🔹 **Project architecture:** FastAPI powers the backend located at
E:\PruebaTecnicaMCGNubiralPythonReactOpenAIArgentinaTradeSEP302025\backend\python\BACK_PruebaTecnicaMCGNubiralPythonReactOpenAIArgentinaTradeSEP302025,
wiring CORS middleware, router modules, and the AI + data adapters. OpenAI client logic lives in
app/adapters/ai/openai_adapter.py, persistence sits in app/adapters/db/session.py
and repository implementations, while DTOs, ports, and use cases (e.g., AskImportExportQuestion)
enforce hexagonal separation between API, adapters, and domain. CSV data lives under CSV/ and docs
(like docs/backend_folder_structure.html) record the folder structure plus plotting exports.

| Layer | Responsibility |
| --- | --- |
| 🌐 API | The FastAPI router exposes /api/v1/import_export, injecting use cases along with CORS middleware for the React front end. |
| 🧠 Domain | DTOs, ports, and use cases (import/export + ask_question) capture the import/export business rules. |
| 🤖 Adapters | The OpenAI adapter handles prompts; repository adapters manage DB sessions; util helpers seed CSV data for plotting. |

| Pattern | Applied? | Evidence / Action |
| --- | --- | --- |
| ✅ SOLID | Partial | Adapters and use cases enforce single responsibility; expanding providers via interfaces keeps OCP/ISP alive. |
| 🏷 Hexagonal architecture | Yes | Clear API/router, domain/use-case, and adapter layers mirror the Hexagonal guidelines documented in the backend folder-structure HTML. |
| ⚡ Event-driven | No | Add event emission (e.g., ImportExportRequested) so downstream logging or queues can subscribe. |
| 🤖 AI | Yes | OpenAIAdapter powers conversational question answering over the imported trade datasets. |
| 🧠 Machine Learning | No | Introduce ML scoring modules trained on the CSV exports and wire them through ports to keep the CLI deterministic. |

🚀 **How to expand missing patterns:** publish domain events, wrap future ML helpers behind the port interfaces, and keep AI calls behind feature toggles so tests remain fast while readiness improves.
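
The feature-toggle idea can be as small as one function; a sketch assuming an ENABLE_OPENAI environment flag (the flag name and fallback text are illustrative):

```python
import os
from typing import Callable


def answer_question(question: str,
                    ask_ai: Callable[[str], str],
                    flag: str = "ENABLE_OPENAI") -> str:
    """Route to the AI adapter only when the toggle is on; otherwise stay deterministic."""
    if os.environ.get(flag) == "1":
        return ask_ai(question)  # e.g., the OpenAI adapter behind its port
    return f"[offline] received: {question}"
```

Tests run with the flag unset and a stub callable, so the suite never touches the network while the production path stays one environment variable away.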

**8. 🏢 PruebaTecnicaPublicisGlobalDelivery 📦**

🔗 View Repo ↗ · 📄 View Documentation ↗

🔹 **Project architecture:** The backend resides in the IntelliJ folder E:\PruebaTecnicaPublicisGlobalDelivery\INTELLIJ\ProcesadorPlanilla. Gradle scripts manage compilation while generated classes live in the build tree. Controllers, services, and model layers follow a layered, hexagonal-inspired layout with controller, service, impl, model, and manager packages, plus specialized interfaces and DTOs for loan and payroll processing.

| Layer | Responsibility |
| --- | --- |
| 🌐 Controller | ProcesaroPlanillaController and ProcessLoanController expose REST endpoints for payroll processing. |
| ⚙️ Service | LoanService, ProcesaroPlanillaService, and ProveedorMiembrosPlanillaImpl coordinate domain logic. |
| 🧱 Models & Managers | Empresa/Empleado/Loan models plus the ProcesadorPlanillas manager encapsulate payroll rules and persistence guidance. |

| Pattern | Applied? | Evidence / Action |
| --- | --- | --- |
| ✅ SOLID | Partial | Interfaces and services isolate responsibilities; further DI across adapters would complete OCP/ISP. |
| 🏷 Hexagonal architecture | Partial | Packages separate controllers/managers, yet introducing explicit ports and adapters for persistence/processing would strengthen the hexagonal claims. |
| ⚡ Event-driven | No | Add event emission when loans are processed or payroll files generated so analytics pipelines can subscribe. |
| 🤖 AI | No | Introduce AI for anomaly detection on loan data and surface guidance via a dedicated AI adapter. |
| 🧠 Machine Learning | No | Train models on processed payroll metrics and expose predictions via new service adapters. |

🚀 **How to expand missing patterns:** wrap business logic behind ports/adapters, emit domain events for payroll operations, and inject future AI/ML helpers to keep the core services deterministic.
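
The anomaly-detection suggestion does not require a trained model to start; a plain z-score screen behind the same adapter signature would do. A hypothetical Python sketch, not repo code (the project itself is Java):

```python
import statistics


def flag_anomalous_loans(amounts: list[float], threshold: float = 2.0) -> list[int]:
    """Return indices of loans whose amount deviates strongly from the mean.

    A simple z-score stand-in for the AI-assisted anomaly detection suggested
    above; a real adapter could later swap in a trained model behind the same
    signature without touching callers.
    """
    if len(amounts) < 2:
        return []
    mean = statistics.fmean(amounts)
    stdev = statistics.pstdev(amounts)
    if stdev == 0:
        return []
    return [i for i, a in enumerate(amounts) if abs(a - mean) / stdev > threshold]
```

Note that with very few samples the maximum attainable z-score is bounded, so the threshold should be tuned to the batch size.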

**9. ✈️ PruebaTecnicaVortechGroup 🧭**

🔗 View Repo ↗ · 📄 View Documentation ↗

🔹 **Project architecture:** The IntelliJ Spring Boot project under
E:\PruebaTecnicaVortechGroup\INTELLIJ\SistemaGestionReservaVuelosMCG\Sistema-Gestion-Reserva-Vuelos-MCG
showcases a hexagonal-inspired layout with adapters, infrastructure, controllers, services, and domain models.
Documentation assets (DOCX on Hexagonal/Layered/CQRS/Event-Driven architectures, HTML diagrams in
HTML\General, and use-case walkthroughs under HTML\Casos_Uso) narrate the decision to place Gradle-built controllers,
DTOs, and mappers around register/availability operations for flights, seating, and passenger data.
The repository also records Postgres SQL scripts and exported PDF/ZIP deliverables to support the runway scenario.

| Layer | Responsibility |
| --- | --- |
| 🌐 Web/API Adapter | Controllers expose REST endpoints for flight registration, seat availability, and passenger management. |
| 🧠 Domain & Application | Services, managers, models, and use cases (e.g., RegisterAvionUseCase) enforce business rules for reservations and availability. |
| 💾 Infra & Config | DataSourceConfigProperties, persistence adapters, and SQL scripts define table schemas (MySQL/Postgres) while Gradle executes builds and tests. |

| Pattern | Applied? | Evidence / Action |
| --- | --- | --- |
| ✅ SOLID | Partial | Service and repository interfaces separate behaviors, though more explicit dependency injection would further reinforce OCP/ISP. |
| 🏷 Hexagonal | Yes | The Hexagonal documentation (DOCX/HTML) and package distribution implement adapters/ports within the application/infrastructure structure. |
| ⚡ Event-driven | Yes | The Event-Driven Architecture doc plus CQRS artifacts describe how commands/events flow through the reservation modules. |
| 🤖 AI | No | AI-assisted demand forecasting could be layered later via a dedicated adapter that consumes event streams or SQL exports. |
| 🧠 Machine Learning | No | Historic flight/reservation data can seed ML models; expose them through ports so the heuristics remain optional. |

🚀 **How to expand missing capabilities:** wrap AI/ML helpers behind documented adapters, emit domain events per flight operation for analytics, and keep the core hexagonal services decoupled via defined ports.
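
The command-to-event flow described in the CQRS docs can be miniaturized as follows; a Python illustration with invented names (the project itself is Java):

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class SeatReserved:
    """Domain event recorded when a reservation command succeeds."""
    flight: str
    seat: str


class ReservationAggregate:
    """Command side: validates and emits events; read models consume them."""
    def __init__(self, flight: str, seats: set[str]) -> None:
        self.flight = flight
        self.available = set(seats)
        self.events: list[SeatReserved] = []

    def reserve(self, seat: str) -> SeatReserved:
        if seat not in self.available:
            raise ValueError(f"seat {seat} not available on {self.flight}")
        self.available.remove(seat)
        event = SeatReserved(self.flight, seat)
        self.events.append(event)  # downstream analytics subscribe here
        return event
```

The event list is the contract between the command side and any read model or analytics consumer, which is the separation the CQRS artifacts describe.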

**10. 🌱 SocialGoodSoftwareTechnicalAssessment 🤝**

🔗 View Repo ↗ · 📄 View Documentation ↗

🔹 **Project architecture:** This assessment (whose deliverable is summarized by
E:\SocialGoodSoftwareTechnicalAssessment\DOC_PDF\Letter to Recruiter - Proposal.pdf)
outlines the social-impact initiative and how the proposed software would align to the nonprofit goals.
While no code repository is linked, the materials stress planning, stakeholder communication, and ethical
data considerations that parallel typical hexagonal/CSR approaches.

| Layer | Responsibility |
| --- | --- |
| 🧭 Strategy | The proposal letter frames the initiative, audience, and intended social-good alignment. |
| 📣 Communication | Documentation communicates impact, governance, privacy, and ethical constraints for community partners. |
| ⚙️ Planning | Guidance hints at cross-functional delivery by mapping social-good features to technical execution. |

| Pattern | Applied? | Evidence / Action |
| --- | --- | --- |
| ✅ SOLID | N/A | No code is shared; the focus is the narrative architecture of the social-impact proposal. |
| 🏷 Hexagonal | N/A | A future implementation could wrap business rules in ports/adapters aligned with the documented objectives. |
| ⚡ Event-driven | N/A | The plan emphasizes stakeholder feedback loops; an event-driven process could model community input streams. |
| 🤖 AI | N/A | The proposal invites responsible technology; future AI helpers should embed the ethical safeguards referenced in the letter. |
| 🧠 Machine Learning | N/A | ML could later power personalization for beneficiaries; keep this behind documented ethical controls. |

🚀 **Next steps:** capture the proposed architecture inside hexagonal ports, record domain events for community responses, and ensure any AI pilots stay compliant with the documented recruiter letter.

**11. 🤖 TA_20250516_AI_ECOMMERCE_ASSISTANT 🛍️**

Language: Python

🔗 View Repo ↗ · 📄 View Documentation ↗

🔹 **Project architecture:** ShopBot lives under E:\TA_20250516_AI_ECOMMERCE_ASSISTANT\ai-ecommerce-assistant. A CLI loop around OpenAI GPT-4o chat completions drives the assistant, with main.py acting as the orchestrator that wires environment-protected API keys, the OpenAI client, and the function schema (the functions list). Business logic is split into assistant_functions.py helpers (load catalog, get product info, check stock) referencing the static product_catalog.json, while the README and requirements pin the AI dependencies.

| Layer | Responsibility |
| --- | --- |
| 🧠 AI Orchestration | main.py loops on user input, handles system/user messages, and calls OpenAI with the declared function schema. |
| 🧩 Domain Helpers | assistant_functions.py loads the JSON catalog and encapsulates SRP-compliant lookups/checks for products. |
| 📦 Data Assets | product_catalog.json plus the README detail the available SKUs and environment setup for the AI assistant. |

| Pattern | Applied? | Evidence / Action |
| --- | --- | --- |
| ✅ SOLID | Partial | Helper functions each do one thing (load catalog, fetch info, check stock) while the orchestrator remains extensible. |
| 🏷 Hexagonal | Partial | The function-call schema isolates the AI contract, but formally introducing ports/adapters (e.g., a catalog provider) would clarify separation. |
| ⚡ Event-driven | No | Could emit events (ProductQueried/StockChecked) when responding, allowing analytics hooks. |
| 🤖 AI | Yes | Uses OpenAI GPT-4o with function calling for natural-language responses and routed business logic. |
| 🧠 Machine Learning | No | Opportunity to train ML heuristics on user queries and product success metrics stored in CSV/JSON exports. |

🚀 **Next steps:** formalize the catalog access behind a port, emit structured events for query/stock hits, and keep GPT/ML experiments behind feature toggles so the assistant stays testable.
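
The function-calling round trip bottoms out in a dispatch table like the one below; a simplified sketch of the main.py/assistant_functions.py pattern with a hypothetical catalog (no network call shown):

```python
import json

# Hypothetical stand-in for product_catalog.json.
CATALOG = {"sku-1": {"name": "Desk Lamp", "stock": 3}}


def get_product_info(sku: str) -> dict:
    """Helper mirroring the assistant_functions.py lookup role."""
    return CATALOG.get(sku, {})


def check_stock(sku: str) -> bool:
    """Helper mirroring the stock-check role."""
    return CATALOG.get(sku, {}).get("stock", 0) > 0


# The model returns a function name plus JSON-encoded arguments;
# the CLI loop routes the call through this table.
DISPATCH = {"get_product_info": get_product_info, "check_stock": check_stock}


def handle_tool_call(name: str, arguments_json: str):
    """Parse the model-provided arguments and invoke the matching helper."""
    args = json.loads(arguments_json)
    return DISPATCH[name](**args)
```

Keeping the dispatch table explicit makes the AI contract testable without ever calling the API: feed it the same name/arguments pairs the model would produce.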

**12. 🧱 TechAssessmentMCGIIHH28SEP2025Python 🔧**

Language: Python

🔗 View Repo ↗ · 📄 View Documentation ↗

🔹 **Project architecture:** Inventory management splits between the FastAPI/SQL backend at
E:\TechAssessmentMCGIIHH28SEP2025Python\backend\python\inventory_management_backend and a React frontend
(frontend\react\inventory-management-system-ihh-react). README and requirements document deployments while migrations and sample data ensure completeness.

| Layer | Responsibility |
| --- | --- |
| 🌐 Backend | FastAPI services, data migrations, and requirements.txt orchestrate inventory CRUD, migrations, and API contracts. |
| 🖥 Frontend | The React SPA consumes backend endpoints to show stock, orders, and dashboards. |
| 📄 Docs | Frontend/backend READMEs and data files provide reproducible contexts. |

| Pattern | Applied? | Evidence / Action |
| --- | --- | --- |
| ✅ SOLID | Partial | Modules isolate catalog/stock logic; expanding DI would reinforce OCP/ISP. |
| 🏷 Hexagonal | Partial | Docs describe adapters; formalized ports for data sources would complete the hexagonal intent. |
| ⚡ Event-driven | No | Consider emitting inventory-change events for analytics. |
| 🤖 AI | No | Future AI helpers could forecast restock needs using catalog data. |
| 🧠 Machine Learning | No | Train ML scoring from exported transaction files behind ports. |

🚀 **Next steps:** document adapter boundaries, add domain events, and keep experiments behind feature toggles to sustain repeatable tests.

**13. 🧩 TECHNICAL_FULL_STACK_MILLION 🏠**

Language: C#

🔗 View Repo ↗ · 📄 View Documentation ↗

🔹 **Project architecture:** The repo combines docs (PDF/DOCX/HTML) and several .NET prototypes under E:\TECHNICAL_FULL_STACK_MILLION. The DOT_NET folder hosts multiple POCs (HelloWorldREST, the RealStateCompanyProject API, mass-data utilities) alongside MongoDB sample data, Swagger, and Serverless artifacts. Each build (Gradle, dotnet CLI) produces bin/obj artifacts referenced by the docs, illustrating layering from controllers/services to infra/adapters.

| Layer | Responsibility |
| --- | --- |
| 🌐 API | Controllers, Swagger endpoints, and Function/worker bootstraps expose REST contracts for Azure/Mongo services. |
| 🧠 Domain/Services | Services, managers, filters, and DTOs encapsulate business logic (loan, property, flight management) with tests. |
| 💾 Infra/Data | MongoDB backups, SQL scripts, sample data utilities, and deployment configs (Docker, buildspec, gradle/func settings) support runtime. |

| Pattern | Applied? | Evidence / Action |
| --- | --- | --- |
| ✅ SOLID | Yes | Interfaces/services split behaviors; docs show separation between adapters, services, and controllers. |
| 🏷 Hexagonal | Partial | The package distributions (docs) articulate adapters and repositories; more formal ports would polish the architecture. |
| ⚡ Event-driven | Yes | Event-Sourcing/CQRS docs describe how messages traverse the system, and the Function/Worker projects emphasize asynchronous workers. |
| 🤖 AI | No | Future AI enhancements could leverage the Mongo data to provide analytics via new services. |
| 🧠 Machine Learning | No | Sample datasets (Mongo JSON) could train ML scoring; tie them through ports for optional experiments. |

🚀 **Next steps:** formalize adapters for Mongo/SQL, emit events for data operations, and wrap AI/ML helpers behind ports to keep the core .NET services stable.
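
The Event-Sourcing idea from the docs reduces to replaying a log into a read model; a minimal Python illustration (the POCs are .NET, and PriceChanged/replay_prices are invented names):

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class PriceChanged:
    """Append-only fact: a property's price was set to a new value."""
    property_id: str
    new_price: int


def replay_prices(events: list[PriceChanged]) -> dict[str, int]:
    """Rebuild the current read model purely from the event log."""
    state: dict[str, int] = {}
    for event in events:
        state[event.property_id] = event.new_price
    return state
```

Because state is derived, new read models (per-city averages, audit views) can be built later from the same log without schema migrations.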

**14. ⚙️ TechAssessmentLDSChurchMemberServiceManagerSystemOCT122025 🛠️**

Language: Java

🔗 View Repo ↗ · 📄 View Documentation ↗

🔹 **Project architecture:** The backend resides at E:\TechAssessmentLDSChurchMemberServiceManagerSystemOCT122025\LDSChurchMemberServiceBackend, with Gradle/IntelliJ configs, Docker, and AWS deployment assets (buildspec, template.yaml). Docs (aws_sdk_usages.html, sw_architecture_folder_structure_backend.txt) highlight the serverless architecture, Lambda functions, and Spring Boot patterns for the concatenation services supporting church member workflows.

| Layer | Responsibility |
| --- | --- |
| 🌐 API | Lambda handlers/controllers expose concatenation REST operations for member services. |
| 🧠 Domain | Services and DTOs manage message concatenation, security, and service orchestration per the LDS naming. |
| ☁️ Deployment | The Dockerfile, template.yaml, and buildspec articulate ECS/Lambda deployments and AWS SDK usage. |

| Pattern | Applied? | Evidence / Action |
| --- | --- | --- |
| ✅ SOLID | Partial | Services isolate responsibilities; explicit dependency injection would enhance OCP/ISP. |
| 🏷 Hexagonal | Partial | Docs describe the folder structure; wrapping service interfaces behind ports would clarify the adapters. |
| ⚡ Event-driven | Yes | Lambda handlers plus AWS SDK usage highlight asynchronous message patterns. |
| 🤖 AI | No | Opportunity to include AI-driven member suggestions inside service handlers. |
| 🧠 Machine Learning | No | Add ML behind ports to analyze engagement data while keeping the Lambda deploys stable. |

🚀 **Next steps:** lock adapter boundaries, document AWS resources, and keep AI/ML proofs behind ports so the deterministic Lambda stack remains predictable.

**15. 🧠 PruebaTecnicaMCGZumTechPythonReactChatBot16Oct2025 🤖**

Language: Python

🔗 View Repo ↗ · 📄 View Documentation ↗

🔹 **Project architecture:** The Python/React ChatBot pairs backend files and notes under E:\PruebaTecnicaMCGZumTechPythonReactChatBot16Oct2025. Sample docs detail requirement alignment, candidate arguments, and the backend directory structure, while the virtual environment and dataset assets support a GPT-powered conversational flow. The documentation also records how the Python services and React components collaborate on the ZumTech chatbot.

| Layer | Responsibility |
| --- | --- |
| 🧠 AI Backend | Python helpers inside the venv orchestrate GPT function calls and catalog lookups described in the docs. |
| 🖥️ Frontend | The React chat UI (noted in the note cards) relays natural-language input to the assistant backend. |
| 📄 Documentation | Index and compliance PDFs detail directories, requirement tracking, and ethical alignment for the chatbot. |

| Pattern | Applied? | Evidence / Action |
| --- | --- | --- |
| ✅ SOLID | Yes | Helper modules focus on single responsibilities (assistant logic, dataset loading) while main orchestrates GPT calls. |
| 🏷 Hexagonal | Partial | Docs show the folder structure; future adapters could wrap data sources/API calls to complete the hexagonal vision. |
| ⚡ Event-driven | No | Instrument query/response events when ChatBot calls succeed so analytics can observe usage flows. |
| 🤖 AI | Yes | A GPT-4o-based chatbot with function calling drives the user experience. |
| 🧠 Machine Learning | No | Add ML experiments on conversation logs while keeping them behind ports for stability. |

🚀 **Next steps:** keep adapter boundaries between React and Python, emit structured events for dialog traces, and gate ML pilots so the assistant remains compliant.

**16. 📊 PruebaTecnicaMCGZumTechSalesForce16Oct2025 💼**

🔗 View Repo ↗ · 📄 View Documentation ↗

🔹 **Project architecture:** The ZumTech Salesforce take-home (documents and plan) sits under E:\PruebaTecnicaMCGZumTechSalesForce16Oct2025. Step-by-step guides, annotated screenshots, and planning PDFs capture how the solution would integrate Salesforce with Python/React components while clarifying compliance expectations and UI flows.

| Layer | Responsibility |
| --- | --- |
| 📄 Documentation | Plan documents explain Salesforce integration, coverage, and compliance for the chatbot experience. |
| 🧠 Proposal | Step-by-step and plan files outline how the Python/React arms integrate with Salesforce data/flows. |
| 📸 Screenshots | The plan includes referenced screenshots capturing UI/UX for Salesforce features. |

| Pattern | Applied? | Evidence / Action |
| --- | --- | --- |
| ✅ SOLID | N/A | The focus is documenting the architecture; a later implementation can align with single-responsibility modules. |
| 🏷 Hexagonal | N/A | The documentation outlines ports/adapters informed by the Salesforce screens; code ports would follow from those notes. |
| ⚡ Event-driven | N/A | Future event-driven flows could broadcast Salesforce data updates to downstream chatbots. |
| 🤖 AI | Yes | The strategy centers on integrating Salesforce context with AI/ChatBot flows to surface relevant data. |
| 🧠 Machine Learning | No | Recommend using logging from Salesforce interactions to train models in a future phase. |

🚀 **Next steps:** capture the documented flows as ports/adapters, emit events from Salesforce syncs, and gate any AI/ML experiments with compliance notes.

**17. 🔧 PruebaTecnicaMCGReFacilNodeJS17Oct2025 🧱**

Language: Node.js

🔗 View Repo ↗ · 📄 View Documentation ↗

🔹 **Project architecture:** The Node.js backend is packaged under E:\PruebaTecnicaMCGReFacilNodeJS17Oct2025 (the zip artifact contains the server stack). Documentation and deployment material describe how the ReFacil service uses Node/Express and ships zipped deliverables for rapid deployment.

| Layer | Responsibility |
| --- | --- |
| 🛠️ Server | Node/Express handlers, zipped inside PTMCGReFacilNodeJS17Oct2025BackendNodeJS.zip. |
| 📄 Architecture Docs | Shared documentation outlines how the Node.js service should integrate with other modules. |
| 📦 Packaging | The zip artifact includes all backend code ready to deploy, capturing Node dependencies. |

| Pattern | Applied? | Evidence / Action |
| --- | --- | --- |
| ✅ SOLID | Partial | Modular Node handlers separate responsibilities, but future DI would clarify OCP. |
| 🏷 Hexagonal | Partial | The zip packaging hints at adapters; defining clear ports for data/commands would solidify the pattern. |
| ⚡ Event-driven | No | Consider emitting Node events on actions for logging or service chaining. |
| 🤖 AI | No | AI/ML guards could be layered later via adapters feeding the zipped service. |
| 🧠 Machine Learning | No | Future ML modules can consume Node events/data exported from the backend. |
|
๐ Next steps: document the zipped backend as a port, add event hooks, and keep ML pilots behind stable adapters.
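The "event hooks" suggested above can be prototyped as a minimal in-process pub/sub bus. This is an illustrative sketch only (shown in Python for brevity; the same shape ports directly to Node's EventEmitter), and the topic and handler names are hypothetical:

```python
from collections import defaultdict
from typing import Callable

class EventBus:
    """Minimal in-process pub/sub bus (illustrative only)."""
    def __init__(self) -> None:
        self._subscribers: dict[str, list[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, payload: dict) -> None:
        # Fan the payload out to every listener registered for this topic.
        for handler in self._subscribers[topic]:
            handler(payload)

# Hypothetical usage: a logging listener reacting to a CRUD operation.
bus = EventBus()
audit_log: list[str] = []
bus.subscribe("record.created", lambda e: audit_log.append(f"created {e['id']}"))
bus.publish("record.created", {"id": 42})
```

Wiring each backend handler to publish through such a bus keeps logging and service chaining decoupled from the request path.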
|
|
18 |
๐งฑ
PruebaTecnicaMCGDMSSoftwareAngularDotNET15OCT2025
โ๏ธ
|
|
C# |
|
๐ View Repoโ๏ธ |
๐ View Documentationโ๏ธ |
|
๐น Project architecture: This DMS solution combines a .NET backend and Angular frontend under
E:\PruebaTecnicaMCGDMSSoftwareAngularDotNET15OCT2025. The backend's Program.cs, controllers, and services focus on Recuerdos ("Memories") features, while the Angular app references RxJS/Angular CLI modules (node_modules) for UI flows.
Swagger, JWT, and SQL dependencies live in the associated bin/obj directories, and docs describe how Auth, Lugares, and Personas controllers tie to the Angular SPA.
| Layer |
Responsibility |
| ๐ API + Security |
Controllers (Auth, Persona, Recuerdos, etc.) expose secure endpoints with JWT and Swagger support. |
| โ๏ธ Services |
Service layer and models orchestrate DMS operations over SQL, while DataSourceConfig ensures connectivity. |
| ๐ฅ๏ธ Angular UI |
MCG DMS Angular app couples to the backend via HTTP services leveraging RxJS observables. |
| Pattern |
Applied? |
Evidence / Action |
| โ
SOLID |
Yes |
Services/controllers split responsibilities with DI-ready patterns described in Program.cs and service classes. |
| ๐ท Hexagonal |
Partial |
Docs detail folder structure (backend vs Angular) but formal ports/adapters would complete the hexagonal claim. |
| โก Event-driven |
No |
Introduce domain events for Recuerdos write operations to trigger downstream analytics. |
| ๐ค AI |
No |
Optional AI could summarize member histories before sending them through the Angular UI. |
| ๐ง Machine Learning |
No |
Train ML models on recounted member interactions and serve predictions via ports. |
|
๐ Next steps: lock down adapter boundaries, emit domain events for service changes, and keep AI/ML trials behind guarded ports for stability.
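The domain events recommended for Recuerdos write operations could take the following shape. This is a hedged sketch in Python for brevity (the repo is C#/.NET); `RecuerdoCreated` and the analytics sink are hypothetical names, not code from the repo:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class RecuerdoCreated:
    """Immutable domain event raised after a Recuerdo write (hypothetical)."""
    recuerdo_id: int
    persona_id: int
    occurred_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

class AnalyticsSink:
    """Downstream subscriber collecting events for later analysis."""
    def __init__(self) -> None:
        self.seen: list[RecuerdoCreated] = []

    def handle(self, event: RecuerdoCreated) -> None:
        self.seen.append(event)

sink = AnalyticsSink()
sink.handle(RecuerdoCreated(recuerdo_id=1, persona_id=7))
```

Raising the event after the service commits the write lets analytics subscribers react without coupling them to the controllers.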
|
|
19 |
๐ง
AISystemsFireFliesCRMAutomationTechAssessment
๐ธ๏ธ
|
|
Python-React |
|
๐ View Repoโ |
๐ View Documentationโ๏ธ |
|
๐น Project architecture: The solution mixes a Gradle-powered Java backend (backend folder) at
E:\AISystemsFireFliesCRMAutomationTechAssessment\backend\AISystemsFireFliesCRMAutomationTechAssessmentBackend
with a Vite/React front end (frontend\aisystems-fireflies-crm-automation-reactjs). Backend bin/gradle artifacts, Swagger UI controllers, and AWS-ready runners orchestrate CRM automation, while the React app (src/main.tsx) consumes the Java APIs. Docs and the README highlight the architecture, directory structure, and compliance with the HubSpot/OpenAI connectors.
| Layer |
Responsibility |
| ๐งโ๐ป Backend |
Java services, Swagger controllers, and runners expose HubSpot/OpenAI automation flows described in HELP.md. |
| ๐จ Frontend |
React/Vite app loads under frontend, provides dashboards, and interacts via API/Socket events. |
| ๐ Docs |
Argumentation and index HTML detail requirements, directory structure, and automation goals. |
| Pattern |
Applied? |
Evidence / Action |
| โ
SOLID |
Yes |
Java services and the UI split responsibilities cleanly; docs champion SRP and modularity for automation features. |
| ๐ท Hexagonal |
Partial |
Docs describe adapters and runner layers; wrapping CRM/OpenAI integrations behind ports would close the loop. |
| โก Event-driven |
Yes |
Backend runners and Swagger controllers dispatch tasks/events to HubSpot/OpenAI services. |
| ๐ค AI |
Yes |
OpenAI connectors in backend, plus doc references to AI role, power the CRM automation assistant. |
| ๐ง Machine Learning |
No |
Future ML scoring modules could analyze CRM event streams; keep them modular via ports. |
|
๐ Next steps: codify adapter boundaries for HubSpot/OpenAI, emit structured operation events, and keep AI/ML helpers behind feature toggles for compliance.
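A feature-toggled AI helper of the kind recommended above might look like this minimal sketch (Python for brevity; the flag name, helper, and fallback behavior are all assumptions, not repo code):

```python
import os

def enrich_note(text: str, ai_call=None) -> str:
    """Return an AI-enriched note only when the (hypothetical) flag is on
    and a client was injected; otherwise stay on the deterministic path."""
    if os.getenv("ENABLE_AI_ENRICHMENT") == "1" and ai_call is not None:
        return ai_call(text)
    # Deterministic fallback keeps compliance reviews simple.
    return text.strip()

# With the flag unset, behavior is fully deterministic:
result = enrich_note("  follow up with lead  ")
```

Because the AI client is injected rather than imported at the call site, tests can exercise both branches without network access.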
|
|
20 |
RetoTecnicoSofka25Diciembre2025 |
Hexagonal |
Java |
|
๐ View Repoโ |
๐ View Documentationโ๏ธ |
RE_ Process follow-up - Sofka - Communication issues from Coca regarding meeting extension |
โจ Project architecture: The repo lives under E:\RetoTecnicoSofka25Diciembre2025\BACKEND\JAVA\RetoTecnicoSofka25Diciembre2025Backend and starts with RetoTecnicoSofka25Diciembre2025Application, wiring the adapter/application/domain packages whose roles the docs/index.html guidance documents, so this Spring Boot service reads as a CQRS-ready hexagonal microservice that can publish to Lambda/Swagger runners while boasting Docker-ready scripts. adapter.rest exposes the clientes, cuentas, movimientos, and personas APIs with DTO validation and Swagger metadata, while the read/write persistence adapters isolate dedicated JPA repositories plus the delivered BaseDatos.sql schema (matching the relational requirements) so the persistence boundary stays audited, extendable, and aligned with the Postman/exception-handling expectations. Domain events such as PersonCreatedEvent and MovementCreatedEvent propagate through adapter.events (Kafka publishers, SNS/SQS adapters, and the EventDeserializer) to push asynchronous flows without bleeding into services, while infrastructure.metrics and infrastructure.tracing keep observability coherent and the springboot-lambda/template.yaml shows how the same layers can deploy serverlessly. The application.command/application.query orchestrators honor explicit ports, so controllers never reach for repositories directly, leaving adapters easily mockable and the architecture faithful to the documented instructions.
| Layer |
Responsibility |
| ๐งฑ API Layer |
adapter.rest delivers Persona/Cliente/Cuenta/Movimiento endpoints plus Swagger docs, feeding validated commands into the application services. |
| โ๏ธ Core Domain |
Domain models, services, commands, queries, and ports (command/query/service directories) encapsulate business rules, event emission, and exception handling. |
| ๐ก Events & Infrastructure |
Event publishers/adapters (Kafka, SNS/SQS) plus metrics/tracing modules keep async flows observable, while persistence adapters persist movements and events through dedicated JPA repos. |
| Pattern |
Applied? |
Evidence / Action |
| โ
SOLID |
Partial |
Services, commands, and controllers respect SRP/ISP while DI-ready ports are defined; tightening constructor injection and OCP-friendly registries would seal the contract. |
| ๐ Hexagonal architecture |
Yes |
Clear adapter/application/domain folder split with ports keeps infrastructure isolated and testable. |
| ๐ Event-driven |
Yes |
Domain events publish through Kafka/SNS/SQS adapters, enabling asynchronous messaging for persona/movement workflows. |
| ๐ค AI |
No |
Score data could flow through a lightweight predictor (OpenAI/ML) before movement persistence to provide insights to the report endpoints. |
| ๐ง Machine Learning |
No |
Training an ML model on movement histories and injecting it via a port would power predictive alerts (e.g., fraud detection) before saving transactions. |
|
๐ How to adopt the missing patterns: wrap movement reporting/history in feature-flagged AI/ML helpers, deliver richer predictions through new ports alternating between Kaggle/TensorFlow models, and keep the hexagonal ports the only layer that knows about those augmented signals.
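The "ML helper behind a port" idea above can be sketched as follows. This is a hedged, language-agnostic sketch in Python (the repo is Java); `FraudScorerPort`, the mean-deviation adapter, and the thresholds are invented for illustration:

```python
from typing import Protocol

class FraudScorerPort(Protocol):
    """Hypothetical port: the hexagonal core depends only on this contract."""
    def score(self, amount: float, history: list[float]) -> float: ...

class MeanDeviationScorer:
    """Toy adapter: flags amounts far above the historical mean.
    A real adapter could wrap an ML model behind the same port."""
    def score(self, amount: float, history: list[float]) -> float:
        if not history:
            return 0.0
        mean = sum(history) / len(history)
        return max(0.0, (amount - mean) / (mean or 1.0))

def save_movement(amount: float, history: list[float], scorer: FraudScorerPort) -> dict:
    # The application service consults the port before persisting.
    risk = scorer.score(amount, history)
    return {"amount": amount, "risk": risk, "flagged": risk > 2.0}

result = save_movement(900.0, [100.0, 110.0, 90.0], MeanDeviationScorer())
```

Swapping the toy scorer for a trained model changes only the adapter, so controllers and repositories stay untouched.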
|
Good practices: document the schema/requirements in docs/index.html, keep BaseDatos.sql in sync with the JPA models, centralize exception messages, and guard each async publisher with configuration so the Cloud/Docker deployments stay deterministic.
|
21 |
PruebaTecnicaXpertgroupIngAWS26Dic2025 |
Hexagonal |
Python |
|
๐ View Repoโ |
๐ View Documentationโ๏ธ |
|
โจ Project architecture: The codebase under E:\PruebaTecnicaXpertgroupIngAWS26Dic2025\python\PruebaTecnicaXpertgroupIngAWS26Dic2025Python orchestrates clean data pipelines that start from the canonical dataset_hospital 2 AWS.json, channel through the scripts/ helpers (completeness, text normalization, cancellations, doctor KPIs, demand forecast, ETL validation, etc.), and emit HTML artifacts inside reports/ plus the seven documented use cases in docs/solution-details.html. Ingestion adapters live in src/adapters (JSON appointment/patient readers, persistence reporters, and city-category imputers), while the business services in src/core/services expose SOLID-friendly routines such as CancellationRiskService, DemandForecastService, ExecutiveDiscrepancyService, and a telemetry-minded ETLPipelineService. These services communicate exclusively through ports defined in src/core/ports.py so each script can swap repositories or datasets before handing off to tests/ where pytest policies guard referential integrity and request-level expectations.
| Layer |
Responsibility |
| ๐งญ Entry Scripts |
Each scripts/*.py file parses AWS-flavored JSON exports, normalizes text/dates, and wires inputs into the ports so the pipeline remains deterministic. |
| โ๏ธ Core Services |
Heavier orchestration lives in src/core/services; these capture features (occupancy dashboards, cancellation risk scoring, demand forecasts, doctor notifications) while depending only on abstractions described in ports.py. |
| ๐ Reports + Docs |
Outputs land in reports/ (summaries, dashboards, metrics), and docs/solution-details.html plus usecases.html hold the stakeholder narratives referenced by the GitHub pages site. |
| Pattern |
Applied? |
Evidence / Action |
| โ
SOLID |
Yes |
Domain services explicitly call ports/injection-friendly repos, and each module (CancellationRisk, DemandForecast, ETLPipeline, etc.) keeps one responsibility. |
| ๐ Hexagonal architecture |
Yes |
Adapters (ingestion, persistence, imputers) sit outside core, and ports.py defines the contract so infrastructure switches (new datasets, cloud connectors) stay isolated. |
| ๐ Event-driven |
No |
Scripts trigger services linearly; converting the report emitters to pub/sub (e.g., AWS SNS + EventBridge) would allow downstream dashboards to react. |
| ๐ค AI |
Partial |
Heuristic scoring (specialty weights + decay) already approximates intelligence; routing the same pipelines through an OpenAI/RL policy before generating insights would make the AI layer explicit. |
| ๐ง Machine Learning |
Partial |
Cancellation and demand forecasts rely on deterministic math today; wrapping them in scikit-learn estimators with saved artifacts would fulfill a true ML story. |
|
๐ How to extend the missing patterns: stream each script through a lightweight EventBridge event bus, capture report-ready payloads as events, and reserve AI/ML helpers (fine-tuned Llama + scikit-learn pipelines) inside new ports so the core services stay deterministic while the predictions become pluggable.
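Wrapping the deterministic forecast math in an estimator-style interface, as suggested for the ML row above, could start from a sketch like this (the fit/predict shape mimics the scikit-learn convention; the class name and moving-average logic are assumptions, not repo code):

```python
class DemandForecastEstimator:
    """Estimator-style wrapper (scikit-learn flavored fit/predict) around
    deterministic moving-average math; illustrative only."""
    def __init__(self, window: int = 3) -> None:
        self.window = window
        self._history: list[float] = []

    def fit(self, series: list[float]) -> "DemandForecastEstimator":
        self._history = list(series)
        return self

    def predict(self, horizon: int = 1) -> list[float]:
        # Roll the moving average forward, feeding predictions back in.
        history = list(self._history)
        out: list[float] = []
        for _ in range(horizon):
            nxt = sum(history[-self.window:]) / min(self.window, len(history))
            out.append(nxt)
            history.append(nxt)
        return out

model = DemandForecastEstimator(window=2).fit([10.0, 12.0, 14.0])
forecast = model.predict(horizon=1)
```

Once services depend on the fit/predict contract, a real scikit-learn estimator with saved artifacts can replace this wrapper without touching callers.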
|
Good practices: keep docs/usecases.html in sync with the repo, version the dataset, run pytest tests before publishing, document commands inside useful_commands.txt, and guard each report generator with configuration-driven toggles so the AWS-targeted deployment stays reproducible.
|
22 |
SmartMarketingsHubSeniorAIEngineerTechAssessment |
|
Python |
|
๐ View Repoโ |
๐ View Documentationโ๏ธ |
|
โจ Project architecture: The project lives in the `SmartMarketingsHubSeniorAIEngineerTechAssessment` folder. The dataset resides at `SmartMarketingsHubSeniorAIEngineerTechAssessmentPython/csv/border_crossing_entry_data.csv`, and a MySQL script (`docs/border_port_activity.sql`) plus the helper `generate_border_sql.py` keep the schema aligned with the US Border Crossing Port Activity dataset. Training happens in `train_llama_border_qlora.py`, which loads the border table, applies 4-bit quantized QLoRA adapters to Llama-3-8B, logs loss/EM/F1 metrics, and emits adapters that the FastAPI inference API (`app.py`, described in the docs) consumes to answer `/ask` queries. Every section in `docs/index.html` (AI/ML engineering, architecture challenge, blockchain engineering, backend/GraphQL, cloud/devops, frontend, final system design) justifies a layer that now sits behind documented guardrails, streaming components, and AWS-ready deployment guidance (EKS autoscaling, CloudFront + API Gateway, DynamoDB/RDS + SQS/SNS). The project keeps the dataset, documentation, and scripts synchronized by referencing `useful_commands.txt`, and the recorded sample rows show how `Point` is stored as spatial coordinates for later vectorization or RAG ingestion.
| Layer |
Responsibility |
| ๐๏ธ Data & Ingestion |
`csv/` files keep the border dataset, `docs/generate_border_sql.py` syncs the MySQL schema, and `docs/border_port_activity.sql` enables relational reporting + QA pipelines. |
| ๐ค Model Training |
`train_llama_border_qlora.py` loads Llama-3-8B in 4-bit mode, applies PEFT adapters, and outputs fine-tuned LoRA weights plus QLoRA metrics referencing EM/F1/ROUGE-L. |
| โก Inference API |
FastAPI `app.py` (described in `docs/index.html`) wires retrieval (FAISS + embeddings), a re-ranker, generation, and guardrails into a single `/ask` endpoint with planned SSE/WebSocket streaming. |
| โ๏ธ Cloud/Operations |
Section 5 enumerates AWS ECS/EKS GPU autoscaling, SQS/SNS event shielding, API Gateway, CloudWatch + Grafana observability, and Terraform/CDK IaC for guardrails. |
| Pattern |
Applied? |
Evidence / Action |
| โ
SOLID |
Yes |
Training, inference, and docs each live in dedicated modules; the dataset, scripts, and docs keep their responsibilities separate, and the API depends on abstractions described in the FastAPI router. |
| ๐ Hexagonal architecture |
Partial |
Adapters are implied (FAISS retriever, LoRA fine-tuning) but explicit ports/adapters are still being formalized; defining interfaces between FAISS, tokenizer, and API would close the gap. |
| ๐ Event-driven |
Partial |
Sections mention Kafka streaming and SNS/SQS for final system design, yet script orchestration flows synchronously; emitting SSE/WebSocket events or pushing QA metrics to SNS would complete the experience. |
| ๐ค AI |
Yes |
QLoRA training, FAISS retrieval, and FastAPI inference with guardrails/analytics deliver AI-native capabilities (EM/F1/ROUGE-L evaluation present). |
| ๐ง Machine Learning |
Yes |
Border dataset-based scoring, demand prediction, and status dashboards already follow ML workflows, and the dataset can feed monitoring or fairness pipelines. |
|
๐ How to adopt the missing patterns: convert FAISS/RAG emitters into EventBridge/SNS messages, declare explicit port interfaces between FastAPI and FAISS connectors, and guard streaming + blockchain notifications with toggleable adapters so Hexagonal intent stays testable while AI/ML insights keep evolving.
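The explicit port between FastAPI and the FAISS connector suggested above could be declared like this. A hedged sketch: `RetrieverPort` and the in-memory test double are hypothetical, and a real adapter would wrap the FAISS index and embeddings:

```python
from typing import Protocol

class RetrieverPort(Protocol):
    """Hypothetical port contract between the API layer and a vector store."""
    def top_k(self, query: str, k: int) -> list[str]: ...

class InMemoryRetriever:
    """Test double standing in for a FAISS-backed adapter."""
    def __init__(self, docs: list[str]) -> None:
        self.docs = docs

    def top_k(self, query: str, k: int) -> list[str]:
        # Naive keyword overlap; the real adapter would use embeddings.
        scored = sorted(self.docs, key=lambda d: -sum(w in d for w in query.split()))
        return scored[:k]

def answer(query: str, retriever: RetrieverPort) -> str:
    # The endpoint handler sees only the port, never FAISS itself.
    context = retriever.top_k(query, k=1)
    return f"context: {context[0]}"

response = answer("border port", InMemoryRetriever(
    ["border crossings by port", "unrelated note"]))
```

With the contract in place, the `/ask` endpoint can be unit-tested against the double while the FAISS adapter is exercised separately.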
|
Good practices: keep `docs/index.html` synchronized with the folder structure, document commands in `useful_commands.txt`, version the dataset (csv + SQL), and run FastAPI/PyTorch unit tests before deploying the AWS/EKS stack mentioned in the cloud section.
|
23 |
AiMlGenerativePlatform |
|
Python |
|
๐ View Repoโ |
๐ View Documentationโ๏ธ |
|
โจ Project architecture: `GenaiDataLakeQuickstart` reproduces the FY2024 FRPP Public Dataset guidance from `docs/index.html` along with the requirement-specific narratives (`docs/us-req-01.html`, `docs/technical-assessment-response.html`) and glossary/reference pages; it highlights ingesting the Excel/CSV exports into S3, cataloging with Glue, querying curated Athena views, and offering a lightweight README-friendly API to summarize FRPP assets plus an optional Bedrock/LLM narrative (REQ-ARC-05/06). The backend (`backend/datalake-api`) is a Gradle Spring Boot service whose `DatalakeApiApplication` spins up Swagger logging, exposes a simple `/concat` REST endpoint (see `controller/ConcatController`), and keeps requirements organized in the `requirements/` packages so the service can be extended with more endpoints, metadata docs, or an OpenSearch/Bedrock connector without touching the core domain.
| Layer |
Responsibility |
| ๐๏ธ Data Catalog |
Requirement docs drive S3 + Glue ingestion, Athena views, and metadata publishing so analysts always know which columns exist (e.g., the FRPP fields list) and how licensed data is exposed. |
| โ๏ธ API Layer |
`DatalakeApiApplication` + `ConcatController` show how the service exposes REST summaries; future endpoints can add Athena/Glue wrappers or call OpenSearch/Bedrock connectors while the Gradle project stays lean. |
| ๐ Documentation |
Docs (glossary, technical assessment response, requirements) record the tradeoffs, dataset expectations, and architecture decisions so reviewers can trace asset publishing details and compliance choices. |
| Pattern |
Applied? |
Evidence / Action |
| โ
SOLID |
Yes |
Documentation, ingestion guidance, and the single-purpose controller keep responsibilities discrete; future Glue/Athena helpers can be injected via configuration. |
| ๐ Hexagonal architecture |
Partial |
Requirements packages isolate domain rules, but there are no formal ports/adapters yet; defining interfaces for the Glue, Athena, and Bedrock clients would solidify the boundary. |
| ๐ Event-driven |
No |
The Gradle service responds to REST calls only; emitting SNS/Kafka events when new datasets land would enable downstream telemetry and ingestion monitoring. |
| ๐ค AI |
Partial |
REQ-ARC-05 encourages optional Bedrock/LLM summaries of Athena results; wiring a Bedrock client alongside Athena queries would deliver the narrative layer. |
| ๐ง Machine Learning |
No |
The focus is on data lake readiness and documentation; adding predictive models over FRPP assets or embedding clustering pipelines would fulfill an ML story. |
|
๐ How to adopt the missing patterns: create Glue/Athena ports/adapters, emit dataset-ready events when S3/Glue work finishes, and introduce Bedrock + ML scoring clients behind new interfaces so the API remains stable while advanced analytics plug in.
|
Good practices: map FRPP field lists inside documentation, keep `docs/technical-assessment-response.html` aligned with implementation notes, log dataset metadata (license, updates), and guard any Bedrock/LLM calls with configuration so the public data lake story stays reproducible.
|
โจ Project architecture: `AiMlGenerativePlatform` narrates an end-to-end ML + generative platform built around the RIPS dataset described in `docs/prueba_tecnica_ml_rips.html`. The repo keeps the glossary (`docs/glosario-conceptos-ml-genai.html`), problem statement (`docs/enunciado-problema.html`), implementation strategy (`docs/estrategia-implementacion.html`), pipeline guidance (`docs/pipelines-reproducibles.html`), and step-by-step log (`docs/paso_a_paso.html`) as the canonical source of truth. Data lives inside `docs/border_crossing_entry_data.rar` plus the SQL script (`docs/border_port_activity.sql`) that models the health encounters; ETL pipelines chunk, embed, and version data to feed the predictive models and the GenAI assistant. Sectioned instructions (ML part, Generative part, API & architecture, MLOps) align with Python scripts that train GLM/Poisson & boosting models; build SHAP-backed explainability; create a FastAPI/REST + GraphQL serving layer; and finally orchestrate RAG/LLM (PyTorch, embeddings, FAISS plus mention of guardrails, EM/F1/ROUGE) with deployment guidance for AWS and streaming events.
| Layer |
Responsibility |
| ๐งพ Data & Feature Layer |
`docs/border_port_activity.sql`, `docs/pipelines-reproducibles.html`, and the archived dataset ensure schema-driven ingestion, feature engineering, and reproducibility notes for demographics, diagnostics, and aggregated attentions. |
| ๐ง ML Modeling |
`prueba_tecnica_ml_rips.html` spells out GLM/Poisson baselines plus boosted alternatives, SHAP explainability, evaluation (MAE/RMSE/F1), and logging of metrics per specialty, so models stay explainable. |
| ๐ช Generative & RAG |
The GenAI assistant pipeline (RAG, chunking, embeddings, FAISS, guardrails) described across the GenAI docs becomes a FastAPI + embedded LLM flow supporting QA while citing evidence. |
| โ๏ธ API + MLOps |
Sections 5 and 6 emphasize FastAPI endpoints, a GraphQL schema, JWT guards, automated tests, Docker/Terraform deployments, monitoring, drift detection, and re-training plans so the ML/GenAI stack can operate in AWS/EKS with CI/CD. |
| Pattern |
Applied? |
Evidence / Action |
| โ
SOLID |
Yes |
Docs, scripts, and dataset responsibilities are cleanly separated (data ingestion, modeling, GenAI inference) and each module follows single responsibility with DI-friendly configuration noted in the implementation strategy. |
| ๐ Hexagonal architecture |
Partial |
The documentation hints at clear borders between data, service, and API layers but formal ports/adapters (e.g., between FastAPI and FAISS/LLM) could be codified to keep infrastructure swappable. |
| ๐ Event-driven |
Partial |
Sections speak of Kafka streaming, guardrails, SSE/WebSocket, and SNS/SQS for real-time border analytics, yet the current scripts run sequentially; emitting events when the GenAI assistant answers or models retrain would fulfill the vision. |
| ๐ค AI |
Yes |
GenAI tests, guardrails, EM/F1/ROUGE-L evaluation, and RAG question answering are core deliverables; the documentation even includes QA metrics for generative responses. |
| ๐ง Machine Learning |
Yes |
Predictive modeling, SHAP explanations, ETL pipelines, drift monitoring, and re-training plans are the emphasized ML story (35% ML + 40% Generative weight in evaluation tables). |
|
๐ How to expand the missing patterns: codify ports/adapters for the FAISS/LLM connectors, emit Kafka/SNS events for predictions and QA, and wrap those adapters in feature flags so future data/model replacements stay hexagonal without destabilizing the AI/ML guardrails.
|
Good practices: keep each doc page synchronized with the repo, version the RIPS dataset and SQL, cite guardrails/metrics in `docs/paso_a_paso.html`, publish QA artifacts per `docs/prueba_tecnica_ml_rips.html`, and run the documented API/ML tests before any AWS deployment.
|
24 |
GenaiDataLakeQuickstart |
|
|
|
๐ View Repoโ |
๐ View Documentationโ๏ธ |
|
25 |
PruebaTecnicaDesarrolladorAzureKibernum19Ene2026 |
|
|
|
๐ View Repoโ |
๐ View Documentationโ๏ธ |
|
โจ Project architecture: The Azure/MuleSoft technical assessment focuses on the hybrid API/Integration scenario documented inside E:\PruebaTecnicaDesarrolladorAzureKibernum19Ene2026\docs (index, punto0–punto7). The landing page highlights the Hybrid API portal, Azure Service Bus/Logic Apps connectivity, API Gateway policies, CI/CD automation for Mule apps, and real-time monitoring via Azure Monitor/Dynatrace. Each punto page (0–7) captures the lifecycle (requirements management, inputs, integration proposal, deliverables, implementation steps, deployment needs, and operational handover) so the narrative explains how MuleSoft APIs on-prem/SaaS interplay with Azure Gateway/Service Bus, how security/auth policies are enforced, and why Application Insights dashboards plus service-level telemetry keep the integration traceable.
| Layer |
Responsibility |
| ๐ Requirements & Design |
Punto0–Punto3 document the lifecycle, integration assumptions, interfaces, and design deliverables, keeping stakeholders aligned on Mule/API and Azure interactions. |
| ๐ง Implementation & Automation |
Punto4–Punto6 map implementation inputs, CI/CD pipelines in Azure DevOps, MuleSoft MUnit tests, Service Bus/Logic App workflows, and deployment constraints for multi-environment rollouts. |
| ๐ฆ Operations |
Punto7 plus the scenario emphasize Azure Monitor, Application Insights, Dynatrace, and documented operating procedures so runbooks tie telemetry to resilient APIs. |
| Pattern |
Applied? |
Evidence / Action |
| โ
SOLID |
Partial |
Docs separate requirements, design, implementation, and operation and assign Mule/Logic App scripts per responsibility, but dependency inversion inside the Mule/DevOps pipelines could be tightened. |
| ๐ Hexagonal architecture |
Partial |
Hybrid APIs, Azure gateway, and Service Bus are described as adapters; writing explicit ports between Mule flows and Azure Event Bus + Logic Apps would close the loop. |
| ๐ Event-driven |
Yes |
Azure Service Bus triggers plus Logic Apps workflows drive the messaging model, supporting asynchronous cancel/modify operations. |
| ๐ค AI |
No |
The focus stays on integration and operations; optional Azure OpenAI or Bot Service assistants could add conversational breakdowns on the portal. |
| ๐ง Machine Learning |
No |
No ML pipelines are documented; adding forecast or anomaly detection models around usage/cancellation patterns would fulfill this layer. |
|
๐ How to adopt missing patterns: formalize ports/adapters between Mule flows and Azure connectors, emit Service Bus/SNS events for telemetry, and wrap optional Azure AI/ML helpers behind feature flags so the hybrid portal remains deterministic while gaining predictive insights.
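One concrete step toward keeping the hybrid portal deterministic is making each Service Bus consumer idempotent, so redelivered cancel/modify messages are processed exactly once. A minimal sketch (Python for brevity; the message shape, ids, and handler name are assumptions):

```python
class IdempotentHandler:
    """Process each message id exactly once, even when the broker redelivers.
    A production version would persist seen ids (or use broker-side
    duplicate detection) instead of an in-memory set."""
    def __init__(self) -> None:
        self._seen: set[str] = set()
        self.processed: list[str] = []

    def handle(self, message: dict) -> bool:
        msg_id = message["id"]
        if msg_id in self._seen:
            return False  # duplicate delivery: acknowledge and skip
        self._seen.add(msg_id)
        self.processed.append(message["action"])
        return True

handler = IdempotentHandler()
handler.handle({"id": "m-1", "action": "cancel"})
handler.handle({"id": "m-1", "action": "cancel"})  # redelivery is ignored
```

The same pattern applies whether the consumer is a Mule flow, a Logic App, or a Function listening on the queue.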
|
Good practices: keep punto0โpunto7 synchronized with the Mule/Azure implementation, record CI/CD/telemetry instructions per environment, and drive runbooks from Application Insights metrics so the portal stays resilient and well-documented.
|
26 |
EvalTecnicoCeibaCoachTech22Jan2026 |
|
|
|
๐ View Repoโ |
๐ View Documentationโ๏ธ |
|
โจ Project architecture: The Ceiba Coach Tech evaluation revolves around the vehicle theft detection scenario found in docs/index.html and the supporting strategy files; the requirements list (RQ-001 to RQ-039) defines cross-country coverage, low-resource device constraints (2-core CPU, 1 GB RAM, GSM), security/LDAP/SOAP/REST integrations, and resiliency expectations (offline storage, batteries, network outages). The documented architecture spans lifecycle management (punto0), design inputs (punto1), interoperability proposal (punto2), deliverables (punto3), implementation steps (punto4), deployment needs (punto6), and operations (punto7), so the platform is depicted as a hybrid solution with MuleSoft/Azure-style APIs, secure device telemetry ingestion, multi-provider cloud abstraction, and analytics layers that visualize stolen vehicle events on maps, correlate GPS/camera feeds, and run trend dashboards.
| Layer |
Responsibility |
| ๐ Requirements & Lifecycle |
Punto0-Punto3 articulate how to capture requirements, define actors, and deliver architecture/integration artifacts for the coach role. |
| ๐ฅ Implementation & Security |
Punto4-Punto5 cover the device constraints, messaging flows, telemetry ingestion, API security, and integration patterns with police systems plus authentication sources. |
| โ๏ธ Deployment & Operations |
Punto6-Punto7 explain deployment needs, scaling/HA, and operational handoff (monitoring, runbooks, offline recovery, change management). |
| Pattern |
Applied? |
Evidence / Action |
| โ
SOLID |
Partial |
Documentation keeps responsibilities distinct, but code would benefit from stricter DI between telemetry ingestion, trending dashboards, and security policies. |
| ๐ Hexagonal architecture |
Partial |
Requirement IDs and punto docs act like ports/adapters, yet explicit boundary implementations (e.g., telemetry adapters, analytics ports) would formalize the hexagonal split. |
| ๐ Event-driven |
Yes |
Device data flows, GSM reliability, and offline sync imply event-driven ingestion; the docs mention map-based event dashboards, so reactive pipelines already govern the story. |
| ๐ค AI |
No |
No AI modules are described; adding LLM-assisted investigations or predictive theft scoring would layer intelligence. |
| ๐ง Machine Learning |
No |
ML models aren't part of the current problem statement; building classifiers for hotspots or anomaly detection would anchor an ML narrative. |
|
๐ How to bridge the gaps: codify adapters for telemetry/analytics ingestion, emit events for every GSM upload, and wrap optional AI/ML scoring engines behind new ports so hexagonal intent remains testable while adding intelligence.
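The offline-storage requirement (GSM dropouts on low-resource devices) pairs naturally with the per-upload events mentioned above; a store-and-forward buffer is the usual shape. Everything below is a hypothetical sketch, not repo code:

```python
class TelemetryBuffer:
    """Store-and-forward: queue readings while the link is down,
    flush the backlog in order once connectivity returns."""
    def __init__(self) -> None:
        self._pending: list[dict] = []
        self.sent: list[dict] = []
        self.online = False

    def record(self, reading: dict) -> None:
        self._pending.append(reading)
        self.flush()  # opportunistic flush on every new reading

    def flush(self) -> None:
        if not self.online:
            return
        while self._pending:
            # FIFO drain preserves event ordering for downstream dashboards.
            self.sent.append(self._pending.pop(0))

buffer = TelemetryBuffer()
buffer.record({"lat": 4.6, "lon": -74.1})  # queued: GSM link is down
buffer.online = True
buffer.flush()                             # link restored: backlog drains
```

On a 1 GB device the pending queue would be bounded and persisted to local storage, but the port contract stays the same.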
|
Good practices: keep every requirement synced with the implementation/diagrams, document tradeoffs (latency, device limits, security), and trace runbooks from punto7 so the coach role can defend the system's resilience.
|
27 |
PruebaTecnicaFullStackAIEngineerVivetori21Ene2026 |
|
|
|
๐ View Repoโ |
๐ View Documentationโ๏ธ |
|
โจ Project architecture: The 2026 full-stack AI engineer exercise spans the `frontend` dashboard, `python-api`, `supabase` storage, `n8n-workflow` orchestrations, and richly documented steps inside `docs/index.html`, `docs/paso-2-1-supabase.html` … `docs/paso-2-4-dashboard.html`. The README/commands explain how Supabase tables store telemetry, the Python FastAPI (AI/ML) backend exposes inference endpoints, n8n automations stitch together ingestion + notifications, and the React/Vite UI renders the AI dashboard images found in `docs/Dashboard.png` plus Render/Supabase/Netlify deployments. Every layer is aligned with the evidence page, the requirements reflect best practices (continuous integration, vector embeddings, and observability), and the docs describe how data flows from Supabase through API predictions into the dashboard while back-end services can be swapped thanks to feature-flagged config.
| Layer |
Responsibility |
| ๐งฎ Data & Storage |
`supabase/` schemas capture vehicle/asset data plus AI prompts; the docs show how Supabase tables roll up analytics and feed the Python API. |
| ๐ค AI Backend |
`python-api/` hosts FastAPI inference + ML helpers that read Supabase, call embeddings/LLMs, and emit responses to the UI or downstream automations. |
| ๐๏ธ Automation & Orchestration |
`n8n-workflow/` coordinates ingestion triggers, notifications, and dataset refresh tasks so the full-stack system reacts to new AI signals. |
| ๐ฅ๏ธ Frontend |
React/Vite dashboard renders KPI cards, timeline charts, and AI insights while referencing the deployment screenshots (`Render.png`, `Netlify.png`). |
| Pattern |
Applied? |
Evidence / Action |
| โ
SOLID |
Yes |
Docs keep responsibilities separate (frontend vs API vs workflows) and the Python service relies on configuration/ports so new ML models can be injected without changing the controller. |
| ๐ Hexagonal architecture |
Partial |
Adapters (Supabase, n8n, frontend) are isolated but explicit ports/interfaces for the Python API vs Supabase/Karma clients would formalize the pattern. |
| ๐ Event-driven |
Yes |
n8n workflows plus Supabase triggers push automation events and keep dashboards synchronized. |
| ๐ค AI |
Yes |
FastAPI returns AI/ML answers drawn from Supabase embeddings and the doc evidence page catalogs predictions and KPI narratives. |
| ๐ง Machine Learning |
Yes |
AI pipelines compute embeddings and route them through LLMs, and the `requirements.txt` plus `evidence.html` document the ML rigor. |
|
๐ How to expand the missing patterns: declare explicit ports between Supabase and Python API, formalize adapter contracts for n8n triggers, and emit additional domain events when AI insights refresh so every layer stays testable.
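Declaring the Supabase boundary as an explicit port could start like the hedged sketch below; `KpiStorePort`, the fake store, and the table/row names are invented for illustration and are not code from the repo:

```python
from typing import Protocol

class KpiStorePort(Protocol):
    """Hypothetical storage contract the dashboard service depends on."""
    def fetch_rows(self, table: str) -> list[dict]: ...

class FakeKpiStore:
    """In-memory stand-in for the Supabase client, handy in unit tests."""
    def __init__(self, rows: dict[str, list[dict]]) -> None:
        self._rows = rows

    def fetch_rows(self, table: str) -> list[dict]:
        return self._rows.get(table, [])

def total_predictions(store: KpiStorePort) -> int:
    # The service never imports the Supabase SDK directly; only the port.
    return sum(row["count"] for row in store.fetch_rows("predictions"))

total = total_predictions(FakeKpiStore(
    {"predictions": [{"count": 3}, {"count": 4}]}))
```

A Supabase-backed adapter implementing the same `fetch_rows` contract can then be swapped in via configuration without touching the dashboard logic.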
|
Good practices: keep `useful_commands.txt` and README aligned with deployment scripts, version the Supabase schema, provide observability for the n8n runs, and run the documented tests before publishing the dashboard (Render/Netlify) so the full-stack AI story remains reproducible.
|
28 |
MCortesGranadosIIsNativeCppSystemsLab |
|
|
|
๐ View Repoโ |
๐ View Documentationโ๏ธ |
|
โจ Project architecture: The native C++ systems lab is documented under E:\MCortesGranadosIIsNativeCppSystemsLab and centers on the IIS-native module inside IisNativeCppHighPerformanceService plus the rich visualization set in docs/ (start_project, single_repo, repo, project_focus, portfolio, POC, lean portfolio, run Visual Studio IIS). The narrative explains how Visual Studio builds the DLL and how the module attaches to IIS to keep request and worker threads light. The documentation surfaces the high-performance focus: thread-safe callbacks in dllmain.cpp and support helpers in framework.h keep the module ready for telemetry streaming, while the screenshots reinforce rollout stories for the dashboards, renderings, and proof-of-concept prototypes.
| Layer |
Responsibility |
| โ๏ธ Native Compute |
The Visual Studio solution compiles the IIS-native DLL, wires the entry points in dllmain.cpp and framework.h, and exposes performant callbacks so the host can keep HTTP threads very lean. |
| ๐ Visualization Docs |
docs/ explains bootstrap steps, repository flow, POC strategy, lean portfolio, and runbook details that show how each native artifact lands in the portfolio dashboards. |
| ๐ Integration Story |
Screenshots and visual guides describe how the native module can plug into IIS, feed telemetry to dashboards, and form part of a hybrid architecture with managed services for analytics. |
| Pattern |
Applied? |
Evidence / Action |
| โ
 SOLID |
Partial |
DLL entry points and framework helpers keep focused responsibilities, but explicit dependency management between the IIS host and the helper classes would round out the remaining SOLID principles. |
| ๐ Hexagonal architecture |
Partial |
Docs hint at adapters (IIS host, dashboards) but formal ports/interfaces around telemetry connectors would finish the hexagonal narrative. |
| ๐ Event-driven |
Yes |
IIS callbacks plus the high-throughput native loops serve as event-driven responders for telemetry and visualization updates. |
| ๐ค AI |
No |
No AI or ML layers are shown; adding anomaly detection or inference on the telemetry stream would bring AI to the platform. |
| ๐ง Machine Learning |
No |
ML model delivery is not part of this lab; integrating classification/scoring engines would fill this lane. |
|
๐ How to expand the missing patterns: codify ports between IIS and the telemetry adapters, emit explicit events for each native state change, and wrap optional AI/ML scoring behind those ports so the native DLL stays deterministic while future analytics plug in cleanly.
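The same ports-and-events recommendation is language-neutral, so it can be sketched compactly in Python even though the lab itself is native C++. All names here (`TelemetrySink`, `ModulePipeline`, `ScoringSink`) are hypothetical illustrations, not symbols from the DLL.

```python
from typing import Callable, Protocol


# Port the native module would publish telemetry through; adapters
# (dashboard writer, log sink, optional AI scorer) implement it.
class TelemetrySink(Protocol):
    def on_event(self, name: str, payload: dict) -> None: ...


class DashboardSink:
    """Adapter that records events for the portfolio dashboards."""

    def __init__(self) -> None:
        self.events: list[tuple[str, dict]] = []

    def on_event(self, name: str, payload: dict) -> None:
        self.events.append((name, payload))


class ScoringSink:
    """Optional AI/ML scoring adapter, registered only behind a feature flag."""

    def __init__(self, score: Callable[[dict], float]) -> None:
        self.scores: list[float] = []
        self._score = score

    def on_event(self, name: str, payload: dict) -> None:
        self.scores.append(self._score(payload))


class ModulePipeline:
    """Stands in for the native module's event fan-out."""

    def __init__(self, sinks: list[TelemetrySink]) -> None:
        self._sinks = sinks

    def state_changed(self, name: str, payload: dict) -> None:
        # Each native state change becomes an explicit event delivered to
        # every registered adapter; the core path stays deterministic.
        for sink in self._sinks:
            sink.on_event(name, payload)
```

Because the scorer is just another `TelemetrySink`, disabling the feature flag simply means not registering it, leaving the request path untouched.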
|
Good practices: keep the Visual Studio/IIS runbooks aligned with the docs, document telemetry expectations inside the portfolio/POC pages, and describe how each DLL export maps to the dashboards so the lab is reproducible and enterprise-ready.
|
| 29 |
GenaiRagPipelineDemoRevStarConsultingMCG13Jan2026 |
|
|
|
๐ View Repoโ |
๐ View Documentationโ๏ธ |
|