Dive deeper. Aim higher.
At Abysalto, that’s not just a motto — it’s how we work. We build serious tech for a variety of clients, but we keep things simple, fast, and focused. We’re a team driven by determination, expertise, and courage — and we’re looking for someone who shares that mindset. Someone ready to take ownership, solve real challenges, and make an impact where it matters. Ready to dive in? Join us as an AI Data Engineer!

🔷 What will you do?
- Design, manage, and optimize the SQL, NoSQL, and vector databases that fuel our AI systems (see the short pgvector sketch after this list)
- Develop and maintain data pipelines for training, ingestion, and inference — working with structured, unstructured, and vectorized data
- Integrate graph databases and knowledge graphs (e.g., Neo4j) into AI workflows
- Automate data workflows and API-based integrations using Python
- Collaborate with AI engineers, backend developers, and DevOps to ensure data is accessible, consistent, and reliable
- Implement data validation, versioning, and lineage tracking strategies to support scalable experimentation
- Monitor and optimize database performance, write advanced queries, and troubleshoot data-related issues
- Contribute to the evolution of our AI infrastructure by aligning data architecture with future use cases
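To give you a feel for the day-to-day, here is a minimal, illustrative sketch of the vector-database side of the role: a pgvector similarity search driven from Python. The connection string, table schema, and embedding dimension are all hypothetical, not our actual setup.

```python
# Illustrative only: a nearest-neighbour lookup against a hypothetical
# `documents` table with a pgvector `embedding` column (psycopg 3).
import psycopg

# Placeholder 768-dim query embedding, serialized as a pgvector literal.
query_vec = str([0.1] * 768)

with psycopg.connect("postgresql://localhost/ai_db") as conn:
    with conn.cursor() as cur:
        # `<=>` is pgvector's cosine-distance operator; smaller means closer.
        cur.execute(
            """
            SELECT id, content
            FROM documents
            ORDER BY embedding <=> %s::vector
            LIMIT 5
            """,
            (query_vec,),
        )
        for doc_id, content in cur.fetchall():
            print(doc_id, content)
```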
🔷 What do we expect from you?
- Strong SQL and NoSQL database skills (PostgreSQL, MongoDB, etc.)
- Hands-on experience with vector databases such as FAISS, Milvus, Pinecone, or PostgreSQL with the pgvector extension
- Familiarity with graph databases (e.g., Neo4j) and the Cypher query language (a small example follows this list)
- Proficient in Python for database scripting, ETL automation, and API communication
- Basic experience with Linux environments, Docker, and a cloud platform (GCP or Azure preferred)
- Understanding of NLP pipelines and how text embeddings, tokenization, and pre-processing fit into AI workflows (e.g., with BERT, spaCy, NLTK)
- Good communication skills in English and a proactive, problem-solving mindset
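And a similarly hedged sketch of the graph side: querying a hypothetical knowledge graph through the official neo4j Python driver (5.x) with a small Cypher statement. The node labels and relationship type are invented for illustration.

```python
# Illustrative only: fetch concepts linked to a named entity in a
# hypothetical knowledge graph, via the neo4j driver and Cypher.
from neo4j import GraphDatabase

def related_concepts(tx, name):
    # The (:Entity)-[:RELATES_TO]->(:Concept) schema is made up for this example.
    result = tx.run(
        "MATCH (e:Entity {name: $name})-[:RELATES_TO]->(c:Concept) "
        "RETURN c.name AS concept",
        name=name,
    )
    return [record["concept"] for record in result]

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "secret"))
with driver.session() as session:
    print(session.execute_read(related_concepts, "vector search"))
driver.close()
```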
🔶 Nice-to-haves
- Experience designing and maintaining APIs (e.g., for data access or annotation tools)
- Knowledge of data modeling best practices, query tuning, and performance optimization
- Experience with backup/recovery strategies and database security
- Familiarity with data versioning tools (like DVC or Delta Lake)
*We’re hiring at junior to mid-level. If you’re at the start of your career but have strong personal or academic projects and an eagerness to learn, we’d still like to hear from you!
🔷 What do we offer?
- Mentorship and growth in an experienced and encouraging team (work side-by-side with senior AI experts and backend engineers)
- Continuous professional development through training and conferences
- Flexible working hours with the option of hybrid work
- Work in an agile environment following SCRUM methodology
- Pleasant and relaxed work environment with various perks (top-quality Herman Miller Aeron chairs, high-end equipment, discounts with partner companies)
- All perks and benefits can be found on our career page
- The chance to launch AI products that improve millions of daily interactions
We solve complex technological challenges in order to simplify and improve the everyday lives of millions of people. Our goal is to become a leader in the software industry, recognized for excellence and quality.
If you're ready to shape what’s yet to be, send us your CV along with a short note about, or a GitHub link to, a Python or ML project you’re proud of.
Apply via the link below.
We look forward to meeting you!