Data Engineer Jobs
These are the latest Data Engineer job offers found.
-
09/12/2025
Amazonas, Antioquia, Arauca, Atlántico, Bolívar, Boyacá, Caldas, Caquetá, Casanare, Cauca, Cesar, Chocó, Córdoba, Cundinamarca, Guainía, Guaviare, Huila, La Guajira, Magdalena, Meta, Nariño, Norte de Santander, Putumayo, Quindío, Risaralda, Santander, Sucre, Tolima, Valle del Cauca, Vaupés, Vichada, San Andrés, Providencia y Santa Catalina, Bogotá
Role: Senior Bilingual MLOps Engineer - Remote
Your role:
MIAP Performance Monitoring: Includes code refactoring and enhancements to a dynamic, two-step regression model used to forecast the performance of different shipboard systems (e.g. HVAC, Service Power).
Fuel Forecasting: Creating an enhanced fuel forecasting model that accounts for additional features (e.g. fuel type mix, speed, weather) and serving the model through a Container App or Endpoint so Business Planning teams can simulate fuel costs under different deployment scenarios.
Azure DevOps/MLOps: An MLOps Engineer will enhance the Azure DevOps/MLOps capabilities of MARC. Proper adherence to MLOps best practices will ensure that MARC's pricing and inventory automation processes are developed, deployed, and monitored in a secure and efficient manner, reducing operational risk and increasing system scalability.
Required:
Machine learning frameworks such as TensorFlow, PyTorch, etc.
Creating and maintaining Python libraries for machine learning that provide a range of algorithms for classification, regression, clustering, etc.
Skills to write efficient and scalable code for data processing, model training, and deployment.
Selecting appropriate ML algorithms and training them on preprocessed data to create accurate predictive models.
Cloud computing platform experience (preferably Azure).
Cloud-based ML services (preferably Azure ML).
Azure services such as ADF, Azure Functions, Azure Cosmos DB, etc.
Azure DevOps principles and practices, including CI/CD pipelines.
Azure security management for security policies and access controls; Azure Active Directory, network security groups.
Strong data analytics skills for data profiling to train the model.
What We Offer:
Stable Employment: Permanent contract offering long-term job security.
Learning & Development: Access to a wide range of online training platforms and professional development resources.
Language Training: Weekly virtual English classes and conversation sessions with certified instructors. Online courses for other languages.
Health Coverage: Comprehensive prepaid medical and dental plans.
Insurance Protection: Life and accident insurance for peace of mind.
Wellness Perks: Discounts and benefits through fitness and technology partnerships. Special-occasion benefits.
Labor Conditions: Workplace: Colombia. Type of Contract: Fixed-term. Salary: To be agreed according to experience.
This vacancy is disclosed through ticjob.co
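The fuel-forecasting duty above amounts to fitting a regression on operational features and querying it for cost scenarios. A minimal sketch of the idea in plain Python follows; the real role would use TensorFlow/PyTorch and serve the model from an Azure Container App, and the speed/fuel figures here are hypothetical illustrations, not data from the posting.

```python
# Minimal sketch of a fuel-consumption forecast as a one-feature
# linear regression. Sample figures are hypothetical.

def fit_simple_regression(xs, ys):
    """Ordinary least squares for y = a + b*x."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    b = cov / var          # slope: extra fuel per extra knot
    a = mean_y - b * mean_x  # intercept
    return a, b

# Hypothetical history: ship speed (knots) vs. fuel burned (tons/day)
speeds = [10, 12, 14, 16, 18]
fuel = [20, 26, 32, 38, 44]

a, b = fit_simple_regression(speeds, fuel)
forecast = a + b * 20  # simulate fuel use at 20 knots
```

A production version would add the other features the posting names (fuel type mix, weather) as extra regressors and expose `forecast` behind an HTTP endpoint.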
-
26/11/2025
Amazonas, Antioquia, Arauca, Atlántico, Bolívar, Boyacá, Caldas, Caquetá, Casanare, Cauca, Cesar, Chocó, Córdoba, Cundinamarca, Guainía, Guaviare, Huila, La Guajira, Magdalena, Meta, Nariño, Norte de Santander, Putumayo, Quindío, Risaralda, Santander, Sucre, Tolima, Valle del Cauca, Vaupés, Vichada, San Andrés, Providencia y Santa Catalina, Bogotá
Role: Data Scientist
Choosing Capgemini means choosing a company where you will be empowered to shape your career the way you'd like, where you'll be supported and inspired by a collaborative community of colleagues around the world, and where you'll be able to reimagine what's possible. Join us and help the world's leading organizations unlock the value of technology and build a more sustainable, more inclusive world.
Requirements:
Advanced English skills (B2+).
Experience with ETL and Azure tools: 2-5 years of experience developing ETL processes using Azure Data Factory, Azure Databricks, and other Azure cloud tools.
Knowledge of Apache Spark and Databricks: Experience using Apache Spark for distributed data processing and knowledge of Databricks for data integration and transformation.
Proficiency in Python and SQL: Advanced Python skills for automating processes and manipulating data, along with a strong command of SQL for querying and transforming data.
Handling large volumes of data: Ability to manage, transform, and optimize large volumes of data using cloud platforms and processing technologies like Spark.
Teamwork and communication: Collaborative skills for working with multidisciplinary teams and the ability to communicate effectively with both technical and non-technical stakeholders.
Responsibilities:
Develop and maintain efficient ETL pipelines using Azure Data Factory and Databricks to process and transform large datasets for analysis.
Analyze large datasets and build machine learning models using Python and Spark to extract actionable insights and solve business problems.
Use Apache Spark to process and analyze large volumes of data in a distributed environment, ensuring scalability and performance.
Deploy machine learning models to production using Azure and Databricks, and monitor their performance for continuous improvement.
Work closely with cross-functional teams (data engineers, business analysts, etc.) to understand requirements and communicate insights and results effectively to stakeholders.
What You'll Love About Working Here:
We recognize the significance of flexible work arrangements. Whether it's remote work or flexible hours, you will get an environment that supports a healthy work-life balance.
Your career growth is at the heart of our mission. Our array of career growth programs and diverse professions are crafted to support you in exploring a world of opportunities.
Equip yourself with valuable certifications in the latest technologies.
Labor Conditions: Workplace: Colombia. Type of Contract: Indefinite-term. Salary: To be agreed according to experience.
This vacancy is disclosed through ticjob.co
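The ETL responsibilities above follow the standard extract-transform-load shape. A minimal sketch in plain Python is below; in this role the same pattern would run on Azure Data Factory and Databricks/Spark, and the table schema (`order_id`, `amount`, `country`) is an invented example, not part of the posting.

```python
# Minimal ETL sketch in plain Python. The posting's stack would use
# Azure Data Factory / Databricks; all names here are illustrative.
import csv
import io

RAW = """order_id,amount,country
1,100.0,CO
2,,CO
3,250.5,US
"""

def extract(text):
    """Extract: read raw CSV rows as dicts."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    """Transform: drop rows with missing amounts, cast types."""
    out = []
    for r in rows:
        if r["amount"]:
            out.append({"order_id": int(r["order_id"]),
                        "amount": float(r["amount"]),
                        "country": r["country"]})
    return out

def load(rows, sink):
    """Load: append clean rows to the target store."""
    sink.extend(rows)
    return len(rows)

sink = []
loaded = load(transform(extract(RAW)), sink)
```

The same three stages map onto a Data Factory pipeline (copy activity → Databricks notebook → sink dataset), only at distributed scale.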
-
14/11/2025
Bogotá
Role: Data Architect
Experience: Specialization in Data or related fields. Solid experience (minimum two (2) years) in Data Architecture, leading the design of Azure cloud-native solutions.
Technical Skills:
Proven mastery of the Microsoft Fabric platform (Lakehouse, Data Warehouse, Pipelines).
Deep experience with the Azure data ecosystem (ADLS, Synapse, etc.).
Solid knowledge of multiple data modeling techniques (dimensional, relational).
Experience defining data governance and data quality strategies.
Responsibilities:
Lead the design of the end-to-end data architecture on Microsoft Fabric and Azure.
Define and document the implementation of modern architectures such as Medallion (Bronze, Silver, Gold).
Design data ingestion patterns for real-time (streaming) and batch sources.
Establish guidelines for data modeling, data governance, and data quality.
Collaborate with security teams to define and implement access controls.
Evaluate and select the most suitable cloud tools and services for each technical challenge.
Labor Conditions: Workplace: Bogotá. Work Modality: Hybrid. Type of Contract: Indefinite-term. Salary: To be agreed according to experience.
This vacancy is disclosed through ticjob.co
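The Medallion architecture the posting mentions refines data through three layers: Bronze (raw as ingested), Silver (cleaned and typed), Gold (business-level aggregates). A toy sketch of that flow in plain Python is below; in Microsoft Fabric each layer would be a Lakehouse table, and the sensor records here are invented for illustration.

```python
# Toy sketch of the Medallion (Bronze/Silver/Gold) layering.
# Plain lists/dicts stand in for Lakehouse tables; data is hypothetical.

bronze = [  # Bronze: raw ingested records, kept exactly as received
    {"sensor": "unit-1", "reading": "21.5"},
    {"sensor": "unit-1", "reading": "bad"},
    {"sensor": "unit-2", "reading": "19.0"},
]

def to_silver(records):
    """Silver: cleaned, typed records; unparseable rows are dropped."""
    silver = []
    for r in records:
        try:
            silver.append({"sensor": r["sensor"],
                           "reading": float(r["reading"])})
        except ValueError:
            continue  # would be quarantined for review in practice
    return silver

def to_gold(records):
    """Gold: business-ready aggregate (mean reading per sensor)."""
    sums, counts = {}, {}
    for r in records:
        sums[r["sensor"]] = sums.get(r["sensor"], 0.0) + r["reading"]
        counts[r["sensor"]] = counts.get(r["sensor"], 0) + 1
    return {s: sums[s] / counts[s] for s in sums}

gold = to_gold(to_silver(bronze))
```

The key design point is that each layer only ever reads from the one below it, so raw data stays replayable and every Gold figure can be traced back through Silver to its Bronze source.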
-
13/11/2025
Bogotá
Join us and be part of ProCibernética! If you have experience implementing and deploying data analytics projects, this is for you. It may interest you, or you may know someone it would interest.
Role: Data Engineer
Some requirements for the role:
Degree in Systems Engineering or related fields.
Experience: Two (2) years in Big Data.
Knowledge: Excellent command of the Python and SQL programming languages. Knowledge of deployments in cloud environments. Intermediate English for technical reading and documentation.
Desirable: Knowledge of Hadoop and distributed architectures.
Main Duties:
Design, develop, and maintain data pipelines (ETL/ELT).
Model and structure relational and non-relational databases.
Administer and optimize database performance.
Implement data cleaning, validation, and quality processes.
Document data flows, models, and definitions.
Automate data ingestion, transformation, and loading processes.
Collaborate with analytics, data science, and business teams.
Configure monitoring and alerting for data processes.
Provide technical support and resolve incidents in data flows.
Evaluate and implement new data tools and technologies.
Apply security and regulatory-compliance practices in data handling.
We Offer: Half a day off on your birthday. Monthly meal allowance. Compensatory days for seniority after 5 years.
Labor Conditions: Workplace: Bogotá. Work Modality: Hybrid. Type of Contract: Indefinite-term. Salary: Competitive according to experience and profile.
This job offer is published under the exclusive ownership of ticjob.co
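The data cleaning and validation duty listed above is commonly implemented as a set of per-field rules that split a batch into accepted and rejected rows. A minimal sketch in plain Python follows; the field names and rules are hypothetical examples, not requirements from the posting.

```python
# Minimal data-quality validation sketch. Field names and rules
# are hypothetical illustrations.

RULES = {
    "id": lambda v: isinstance(v, int) and v > 0,
    "email": lambda v: isinstance(v, str) and "@" in v,
}

def validate(rows):
    """Split rows into (valid, rejected) according to RULES."""
    valid, rejected = [], []
    for row in rows:
        if all(check(row.get(field)) for field, check in RULES.items()):
            valid.append(row)
        else:
            rejected.append(row)
    return valid, rejected

rows = [
    {"id": 1, "email": "ana@example.com"},
    {"id": -2, "email": "no-at-sign"},
]
valid, rejected = validate(rows)
```

In a real pipeline the rejected rows would feed the monitoring and alerting duty from the same list: an alert fires when the rejection rate for a batch crosses a threshold.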