Data Engineer Cloud Technologies Jobs
These are the latest Data Engineer Cloud Technologies job offers found.
-
12/12/2025
Other
We are looking for our next Data Analytics Developer, who will be responsible for designing, developing, and maintaining the ETL (Extract, Transform, Load) processes that extract, clean, and transform data from various sources into the central data warehouse, guaranteeing data integrity and quality. This person will also handle the processing and calculation of information for building analytical models, support end users through the validation, certification, and adoption of the implemented solutions, and monitor and control the ETL processes. If you fit the following profile, do not hesitate to apply:

Academic Background: Degree in Systems Engineering, Electronic Engineering, Computer Science, or related fields.

Experience: More than 3 years participating in data analytics projects and implementing Business Intelligence / DWH projects in analyst and/or developer roles; using ETL integration tools, building information analysis structures, and developing automated data visualization processes based on dimensional models from data warehouses.

Specific Knowledge: Information modeling and architecture, cloud architectures (AWS / Azure), ETL integration and visualization tools (Power BI), programming in Python, SQL, and T-SQL, relational databases, Redshift, Lambda, Step Functions.

We appreciate the diversity of talents on our team. We promote an inclusive, respectful, and collaborative work environment. Our selection processes value each person's skills, experience, and potential, regardless of gender, age, sexual orientation, or any other condition. The variety of professionals complements us and enriches organizational results.
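To ground the kind of ETL work this listing describes, here is a minimal sketch of one extract-clean-load step using the stack it names (Python, S3/Lambda, Redshift). This is one plausible shape, not the employer's actual pipeline; every bucket, table, and workgroup name is a hypothetical placeholder.

# Minimal ETL sketch: extract a CSV from S3, clean it, load into Redshift.
# All resource names (buckets, table, workgroup) are hypothetical.
import csv
import io

import boto3

s3 = boto3.client("s3")
redshift = boto3.client("redshift-data")

def handler(event, context):
    # Extract: read the raw CSV object named in the triggering event.
    obj = s3.get_object(Bucket="raw-data-bucket", Key=event["key"])
    body = obj["Body"].read().decode("utf-8")
    rows = list(csv.DictReader(io.StringIO(body)))

    # Transform: drop rows missing the key column, normalize whitespace.
    clean = [
        {k: v.strip() for k, v in row.items()}
        for row in rows
        if row.get("customer_id")
    ]
    if not clean:
        return {"loaded": 0}

    # Stage the cleaned file back to S3, then COPY it into the warehouse.
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=clean[0].keys())
    writer.writeheader()
    writer.writerows(clean)
    s3.put_object(Bucket="clean-data-bucket", Key=event["key"], Body=out.getvalue())

    redshift.execute_statement(
        WorkgroupName="analytics-wg",  # hypothetical Redshift Serverless workgroup
        Database="dwh",
        Sql=f"COPY staging.customers FROM 's3://clean-data-bucket/{event['key']}' "
            "IAM_ROLE default CSV IGNOREHEADER 1;",
    )
    return {"loaded": len(clean)}

In a Step Functions setup such as the one the listing hints at, a handler like this would be one state in a larger workflow, with monitoring and retry policies defined around it.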
-
09/12/2025
Amazonas, Antioquia, Arauca, Atlántico, Bolívar, Boyacá, Caldas, Caquetá, Casanare, Cauca, Cesar, Chocó, Córdoba, Cundinamarca, Guainía, Guaviare, Huila, La Guajira, Magdalena, Meta, Nariño, Norte de Santander, Putumayo, Quindío, Risaralda, Santander, Sucre, Tolima, Valle del Cauca, Vaupés, Vichada, San Andrés, Providencia y Santa Catalina, Bogotá
Role: Senior Bilingual MLOps Engineer - Remote

Your role: MIAP Performance Monitoring: code refactoring and enhancements on a dynamic, two-step regression model used to forecast the performance of different shipboard systems (e.g. HVAC, Service Power). Fuel Forecasting: creating an enhanced fuel forecasting model that accounts for additional features (e.g. fuel type mix, speed, weather) and serving the model through a Container App or Endpoint so Business Planning Teams can simulate fuel costs under different deployment scenarios. Azure DevOps/MLOps: the MLOps Engineer will enhance the Azure DevOps/MLOps capabilities of MARC. Proper adherence to MLOps best practices will ensure that MARC's pricing and inventory automation processes are developed, deployed, and monitored in a secure and efficient manner, reducing operational risk and increasing system scalability.

Required: Machine learning frameworks such as TensorFlow, PyTorch, etc. Creating and maintaining Python machine learning libraries that provide a range of algorithms for classification, regression, clustering, etc. Skills to write efficient and scalable code for data processing, model training, and deployment. Selecting appropriate ML algorithms and training them on preprocessed data to create accurate predictive models. Cloud computing platform experience (preferably Azure). Cloud-based ML services (preferably Azure ML). Azure services such as ADF, Azure Functions, Azure Cosmos DB, etc. Azure DevOps principles and practices, including CI/CD pipelines. Azure security management for security policies and access controls. Azure Active Directory, network security groups. Strong data analytics skills for data profiling to train the model.

What We Offer: Stable Employment: permanent contract offering long-term job security. Learning & Development: access to a wide range of online training platforms and professional development resources. Language Training: weekly virtual English classes and conversation sessions with certified instructors, plus online courses for other languages. Health Coverage: comprehensive prepaid medical and dental plans. Insurance Protection: life and accident insurance for peace of mind. Wellness Perks: discounts and benefits through fitness and technology partnerships, plus special-occasion benefits.

Labor Conditions: Workplace: Colombia. Type of Contract: fixed-term. Salary: to be agreed according to experience. This vacancy is disclosed through ticjob.co
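The listing does not define its "dynamic, two-step regression model" further. One plausible reading, sketched below on synthetic data, is a baseline regression on the primary drivers followed by a second model fitted to its residuals using secondary features. All feature names, coefficients, and data are illustrative assumptions, not the employer's model.

# Hypothetical two-step regression sketch: a baseline linear model plus a
# residual model on secondary features. Synthetic data throughout.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n = 500
speed = rng.uniform(10, 22, n)      # knots
weather = rng.normal(0, 1, n)       # e.g. a sea-state index
fuel_mix = rng.uniform(0, 1, n)     # share of alternative fuel
fuel_use = 3.0 * speed**2 / 100 + 2.0 * weather + 5.0 * fuel_mix + rng.normal(0, 0.5, n)

# Step 1: baseline regression on the primary drivers (speed, weather).
X1 = np.column_stack([speed, weather])
base = LinearRegression().fit(X1, fuel_use)
residuals = fuel_use - base.predict(X1)

# Step 2: model what the baseline misses using a secondary feature.
resid_model = GradientBoostingRegressor().fit(fuel_mix.reshape(-1, 1), residuals)

def forecast(speed_kn, sea_state, mix):
    """Combine both stages into a single fuel-consumption forecast."""
    return base.predict([[speed_kn, sea_state]])[0] + resid_model.predict([[mix]])[0]

print(forecast(18.0, 0.5, 0.3))

Served behind an Azure Container App or managed endpoint, a function like forecast is what would let planning teams simulate fuel costs across deployment scenarios.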
-
26/11/2025
Amazonas, Antioquia, Arauca, Atlántico, Bolívar, Boyacá, Caldas, Caquetá, Casanare, Cauca, Cesar, Chocó, Córdoba, Cundinamarca, Guainía, Guaviare, Huila, La Guajira, Magdalena, Meta, Nariño, Norte de Santander, Putumayo, Quindío, Risaralda, Santander, Sucre, Tolima, Valle del Cauca, Vaupés, Vichada, San Andrés, Providencia y Santa Catalina, Bogotá
Role: Data Scientist

Choosing Capgemini means choosing a company where you will be empowered to shape your career the way you'd like, where you'll be supported and inspired by a collaborative community of colleagues around the world, and where you'll be able to reimagine what's possible. Join us and help the world's leading organizations unlock the value of technology and build a more sustainable, more inclusive world.

Requirements: Advanced English skills (B2+). Experience in ETL and Azure tools: a minimum of 2-5 years of experience developing ETL processes using Azure Data Factory, Azure Databricks, and other Azure cloud tools. Knowledge of Apache Spark and Databricks: experience using Apache Spark for distributed data processing and knowledge of Databricks for data integration and transformation. Proficiency in Python and SQL: advanced Python skills for automating processes and manipulating data, along with a strong understanding of SQL for querying and transforming data. Handling large volumes of data: ability to manage, transform, and optimize large volumes of data using cloud platforms and processing technologies like Spark. Teamwork and communication: collaborative skills with multidisciplinary teams and the ability to communicate effectively with both technical and non-technical stakeholders.

Responsibilities: Develop and maintain efficient ETL pipelines using Azure Data Factory and Databricks to process and transform large datasets for analysis. Analyze large datasets and build machine learning models using Python and Spark to extract actionable insights and solve business problems. Use Apache Spark to process and analyze large volumes of data in a distributed environment, ensuring scalability and performance. Deploy machine learning models to production using Azure and Databricks, and monitor their performance for continuous improvement. Work closely with cross-functional teams (data engineers, business analysts, etc.) to understand requirements and communicate insights and results effectively to stakeholders.

What You'll Love About Working Here: We recognize the significance of flexible work arrangements. Whether it's remote work or flexible hours, you will get an environment that supports a healthy work-life balance. Your career growth is at the heart of our mission: our array of career growth programs and diverse professions are crafted to support you in exploring a world of opportunities, and you can equip yourself with valuable certifications in the latest technologies.

Labor Conditions: Workplace: Colombia. Type of Contract: indefinite-term. Salary: to be agreed according to experience. This vacancy is disclosed through ticjob.co
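As an illustration of the ETL pipelines this role describes, here is a minimal PySpark sketch of the kind that might run in a Databricks notebook, with data landed by Azure Data Factory. The mount paths, table layout, and column names are hypothetical placeholders, not Capgemini's pipeline.

# Minimal PySpark ETL sketch: read raw CSVs, clean and type them, write Delta.
# Paths and column names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

# Extract: raw CSVs landed in the lake (e.g. by an ADF copy activity).
raw = spark.read.option("header", True).csv("/mnt/raw/orders/")

# Transform: type the amount column, drop malformed rows, stamp a load date.
clean = (
    raw.withColumn("amount", F.col("amount").cast("double"))
       .filter(F.col("order_id").isNotNull() & (F.col("amount") > 0))
       .withColumn("load_date", F.current_date())
)

# Load: write a Delta table partitioned for downstream analysis.
clean.write.format("delta").mode("overwrite").partitionBy("load_date").save("/mnt/curated/orders/")

Because Spark evaluates these transformations lazily across the cluster, the same pipeline scales from sample files to the "large volumes of data" the requirements mention without code changes.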
-
14/11/2025
Bogotá
Role: Data Architect

Experience: Specialization in Data or related fields. Solid experience (minimum two (2) years) in Data Architecture, leading the design of cloud-native solutions on Azure.

Technical Skills: Proven command of the Microsoft Fabric platform (Lakehouse, Data Warehouse, Pipelines). Deep experience with the Azure data ecosystem (ADLS, Synapse, etc.). Solid knowledge of multiple data modeling techniques (dimensional, relational). Experience defining data governance and data quality strategies.

Responsibilities: Lead the design of the end-to-end data architecture on Microsoft Fabric and Azure. Define and document the implementation of modern architectures such as Medallion (Bronze, Silver, Gold). Design data ingestion patterns for real-time (streaming) and batch sources. Establish guidelines for data modeling, data governance, and data quality. Collaborate with security teams to define and implement access controls. Evaluate and select the most suitable cloud tools and services for each technical challenge.

Working Conditions: Workplace: Bogotá. Work Modality: Hybrid. Type of Contract: Indefinite-term. Salary: To be agreed according to experience. This vacancy is disclosed through ticjob.co
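The Medallion architecture this listing names layers a lakehouse as Bronze (raw, as landed), Silver (cleaned and conformed), and Gold (business-level aggregates). A short illustrative PySpark/Delta sketch of that layering follows; all paths, table names, and columns are hypothetical.

# Illustrative Medallion (Bronze/Silver/Gold) layering as PySpark/Delta steps.
# Paths and columns are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("medallion-demo").getOrCreate()

# Bronze: land raw events as-is, preserving the source payload.
bronze = spark.read.json("/lake/landing/events/")
bronze.write.format("delta").mode("append").save("/lake/bronze/events/")

# Silver: cleaned, deduplicated, conformed records.
silver = (
    spark.read.format("delta").load("/lake/bronze/events/")
         .dropDuplicates(["event_id"])
         .filter(F.col("event_ts").isNotNull())
         .withColumn("event_date", F.to_date("event_ts"))
)
silver.write.format("delta").mode("overwrite").save("/lake/silver/events/")

# Gold: business-level aggregate ready for BI consumption.
gold = silver.groupBy("event_date", "event_type").agg(F.count("*").alias("event_count"))
gold.write.format("delta").mode("overwrite").save("/lake/gold/daily_event_counts/")

On Microsoft Fabric the same layering maps naturally onto Lakehouse items and Pipelines, with governance and access controls applied per layer as the responsibilities above describe.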