Ansible Remote Jobs

214 Results

Posted 30+ days ago

Solution Engineer (OTC) - REF3164X

Deutsche Telekom IT Solutions | Budapest, Debrecen, Szeged, Pécs, Hungary, Remote
Tags: DevOps, agile, Jira, SQL, Ansible, API, OpenStack, Linux, Python

Deutsche Telekom IT Solutions is hiring a Remote Solution Engineer (OTC) - REF3164X

Job Description

The Public Cloud Portfolio Unit operates on a national and international level, for medium-sized and large companies. We develop, market and operate agile, cloud-native, forward-looking products and services for the digital world. We see ourselves as innovation drivers and make our customers' business fit for the digital future. Our mission: Together with our customer, shaping the safest, easiest and most efficient transformation to a digitized and cloud-native future.

 

Your Department

We run Open Telekom Cloud! Open Telekom Cloud is a public cloud standard product based on open source community software and driven by principles of DevSecOps. Lean structures, agile methods, highly motivated teams and an extremely dynamic business environment determine our actions. With this customer-oriented and agile orientation, we are the anchor point for the Public Cloud business in Deutsche Telekom Group.

We are measured by delivering a secure, stable and innovative platform. We work jointly with our platform partner and other partners out of the OpenStack ecosystem to create a highly innovative public cloud product based on European security and data protection standards.

We are looking for people who are professionals and evangelists with a great deal of enthusiasm for cloud technology and who are up to the challenges created by the development and operation of a hyper-scale public cloud.

We offer a unique insight into how a large public cloud works under the hood, intercultural teamwork, flat hierarchies, and an independent working-style.

Your Tasks

As "Solution Engineer OTC" you understand the latest developments in cloud and container technology. You will operate and enhance our Open Telekom Cloud platform in a customer-oriented manner.

 

Do you like to:

Solve complex problems in the daily operation of a hyper-scaler's cloud backend?

Automate consistently with common automation frameworks?

Work in a team of specialists where everyone helps each other in an open and trusting manner?

Work in a proactive and agile way?

Participate in and coordinate daily activities and incoming customer requests in a process-oriented manner?

Qualifications

Your Profile

  • Completed studies in a technical, engineering or scientific subject, or comparable professional training.
  • 3-5 years of professional experience in IT, with a focus on modern cloud technologies.
  • In-depth knowledge of Linux (e.g. networking, logging concepts), system tools (sudo, SSH), network-related services (e.g. LDAP, NTP), and the Linux/Unix command line.
  • System technologies (Linux, KVM, Linux network and storage, system tools) as well as OpenStack.
  • Strong Linux administration knowledge (e.g. RHCSA or LFCS).
  • Experienced with Ansible.
  • Advanced shell-scripting experience.
  • Beginner-level Python.
  • Experience with open-source projects and community requirements.
  • Agile tools (e.g. GitHub, JIRA, Confluence) and methodologies (DevOps, GitLab CI/CD).
  • High level of customer focus.
  • Ability to assess technical solutions and come up with creative approaches.
  • Fluency in written and spoken English.
  • Advanced SQL knowledge, including more complex queries.
  • Experience with data engineering, e.g. pandas.
  • Experience developing Grafana dashboards.
  • Experience analyzing and using APIs, e.g. with curl, Postman, and Python.
  • Advanced Linux shell skills.
  • Experience writing high-level designs (HLDs) and describing complex processes.
  • Advanced experience with multiple cloud services.
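The API-analysis skill in the profile above (curl, Postman, Python) can be sketched in a few lines of Python. The endpoint, the response payload, and the server names below are hypothetical stand-ins for an OpenStack-style compute API; only the X-Auth-Token header convention is taken from OpenStack itself.

```python
import json
from urllib.request import Request

# Hypothetical JSON body, shaped like an OpenStack-style "list servers" response.
payload = json.dumps({
    "servers": [
        {"name": "otc-node-1", "status": "ACTIVE"},
        {"name": "otc-node-2", "status": "ERROR"},
    ]
})

def build_request(url: str, token: str) -> Request:
    # OpenStack services expect the auth token in the X-Auth-Token header.
    return Request(url, headers={"X-Auth-Token": token, "Accept": "application/json"})

def failed_servers(body: str) -> list:
    # Parse the response and return the names of servers that are not ACTIVE.
    servers = json.loads(body).get("servers", [])
    return [s["name"] for s in servers if s["status"] != "ACTIVE"]

print(failed_servers(payload))  # ['otc-node-2']
```

The same request is easy to reproduce with curl or Postman for comparison, which is typically how such API analysis starts before it is scripted.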

See more jobs at Deutsche Telekom IT Solutions

Apply for this job

Posted 30+ days ago

AWS Architect - REF3288M

Deutsche Telekom IT Solutions | Budapest, Debrecen, Szeged, Pécs, Hungary, Remote
Tags: Sales, DevOps, EC2, Lambda, agile, 5 years of experience, Terraform, Design, Ansible, Java, Docker, Python, AWS

Deutsche Telekom IT Solutions is hiring a Remote AWS Architect - REF3288M

Job Description


As a Senior AWS Solution Architect & Senior AWS Cloud Consultant you fulfill the following tasks:

  • You research and analyze available tools and frameworks and then implement those as cloud-native solutions for our customers
  • You assist the T-Systems Public Cloud Product Management Team to build and define solutions on top of AWS based on customer and market feedback
  • You support the customer engagement lifecycle from pre-sales to implementation and identify and support follow-up activities
  • You participate in or lead technical workshops as a cloud expert with partners and clients and implement Pilot and Proof of Concept projects
  • You advise clients on architecture methods, governance and the use of cloud-native best practices
  • You will run Well-Architected Reviews on customer workloads advising the customer on AWS best practices 
  • You will review existing customer environments, plan and implement migrations to AWS cloud by applying appropriate migration strategy (Relocate, Rehosting, Refactoring etc.) 
  • You develop automated cloud-native solutions following the IaC approach, using appropriate AWS services such as Lambda, Step Functions, Terraform, and CloudFormation (including higher-level provisioning options like SAM or CDK)
  • You are responsible for expanding and improving the solution portfolio and you develop new features, templates and documentation 
  • You act as a coach and mentor for young and new team members
  • You take over technical sub-project management responsibility as a sparring partner between business and IT in interdisciplinary teams.
  • You independently steer error analysis and problem resolution together with operations in the DevOps teams
  • You work in a highly dynamic, international environment with a highly motivated team
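As a sketch of the IaC/serverless task above, here is a minimal Python Lambda-style handler. The event shape (an S3-notification-style record list) and the key names are hypothetical examples; in practice such a function would be provisioned via Terraform, CloudFormation, SAM, or CDK as the posting describes.

```python
# Minimal AWS Lambda-style handler in Python. The event below mimics an
# S3 notification record list; the object keys are hypothetical.

def handler(event, context=None):
    # Collect the object keys this invocation would process.
    records = event.get("Records", [])
    keys = [r["s3"]["object"]["key"] for r in records if "s3" in r]
    return {"statusCode": 200, "processed": keys}

# Local invocation with a fabricated event:
event = {"Records": [{"s3": {"object": {"key": "reports/2024-01.csv"}}}]}
print(handler(event))  # {'statusCode': 200, 'processed': ['reports/2024-01.csv']}
```

Testing the handler locally like this, before wiring it to real event sources, is a common step in the development workflow the role implies.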

Qualifications

  • Next to a degree in computer science or another technical discipline, you have at least 5 years of experience in two or more of the following areas: software development, automation and containers, design & implementation of distributed cloud applications, agile methodologies
  • You have 2+ years of working experience with a hyperscaler cloud platform (preferably AWS)
  • Solid working experience in four or more of the listed technology areas: Relational Databases, Programming, Web Technologies, Security, Networking, Migration, Big Data, Machine Learning or Artificial Intelligence
  • You bring experience in AWS services for example Amazon EC2 compute, AWS Lambda, AWS EventBridge, AWS CloudWatch, Amazon RDS, Amazon ECS
  • Extensive experience in DevOps methodologies and automation with Shell Scripting, Docker, Terraform, Ansible, and more
  • You bring experience in development with NodeJS, Python, Java or another common programming language
  • You are experienced with complex, large scale environments and able to design a resilient, secured, and performant cloud infrastructure solution
  • Demonstrated experience developing enterprise-scale distributed applications using modern software development practices and technologies is highly desired
  • Ideally, you already have client-facing experience
  • Strong communication skills in English (German is nice to have)
  • Flexibility and readiness to travel occasionally are expected

See more jobs at Deutsche Telekom IT Solutions

Apply for this job

Posted 30+ days ago

DevOps Engineer - Telekom Cloud Create (Open Telekom Cloud) - REF3156S

Deutsche Telekom IT Solutions | Budapest, Debrecen, Szeged, Pécs, Hungary, Remote
Tags: Golang, agile, Jira, Ansible, Java, OpenStack, Kubernetes, Linux

Deutsche Telekom IT Solutions is hiring a Remote DevOps Engineer - Telekom Cloud Create (Open Telekom Cloud) - REF3156S

Job Description

We run Open Telekom Cloud! Open Telekom Cloud is a public cloud standard product based on open source community software and driven by principles of DevSecOps. Lean structures, agile methods, highly motivated teams and an extremely dynamic business environment determine our actions. With this customer-oriented and agile orientation, we are the anchor point for the Public Cloud business in Deutsche Telekom Group.

We are measured by delivering a secure, stable, and innovative platform. We work jointly with our platform partner and other partners out of the OpenStack ecosystem to create a highly innovative public cloud product based on European security and data protection standards.

We are looking for people who are professionals and evangelists with a great deal of enthusiasm for cloud technology and who are up to the challenges created by the development and operation of a hyper-scale public cloud. 

We offer a unique insight into how a large public cloud works under the hood, intercultural teamwork, flat hierarchies, and an independent working-style.

As an Engineer, you understand the latest developments in cloud and hardware technology. Customer-oriented, you will further develop the hardware architecture of our Open Telekom Cloud platform.

  • Be responsible for the structured expansion and upgrading of our hardware under hyper-scaler conditions.
  • Be a trusted advisor for public cloud technology.
  • Consistently automate with common automation frameworks (Ansible, Puppet, Chef).
  • Get your hands "dirty" without reservations.
  • Work in a team of specialists where everyone helps each other openly and with trust.
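The automation-framework bullet above can be illustrated with Ansible's dynamic-inventory convention, sketched here in Python; the group name, host names, and user variable are hypothetical, while the JSON shape (top-level groups plus a _meta.hostvars block) is Ansible's documented format.

```python
import json

def build_inventory():
    # Ansible dynamic-inventory scripts emit JSON with group objects and a
    # _meta.hostvars block; the hosts below are hypothetical examples.
    hosts = ["otc-compute-01", "otc-compute-02"]
    return {
        "compute": {"hosts": hosts},
        "_meta": {"hostvars": {h: {"ansible_user": "cloud-admin"} for h in hosts}},
    }

if __name__ == "__main__":
    # Ansible invokes the script with --list and reads the JSON from stdout.
    print(json.dumps(build_inventory(), indent=2))
```

An executable script like this can be passed to Ansible's `-i` option so that playbooks target hosts discovered at runtime rather than a static inventory file.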

Qualifications

  • Completed studies in a technical, engineering or scientific subject or comparable professional training. 
  • 3-5 years of professional experience in multi-cultural IT with a focus on various cloud and data centers. 
  • In-depth knowledge of virtualization servers, security and networking hardware.
  • Strong senior experience in Linux, security, and network-related services. 
  • Deep practice in any Cloud environment.
  • Experience operating Kubernetes.
  • Experience developing or contributing to a Helm chart.
  • Ansible.
  • Golang or Java is nice to have.
  • High level of customer focus.
  • Knowledge of agile development processes.
  • Ability to assess technical solutions and come up with new creative solutions.
  • Fluency in written and spoken English.
  • You will be working in the European Union to meet our customers' data security and privacy requirements. 
  • Open Mindset, working in squads.
  • Experience with Jira and Confluence (Atlassian) ticketing and documentation tools is nice to have.

See more jobs at Deutsche Telekom IT Solutions

Apply for this job

Posted 30+ days ago

Customer Success Engineer, French Speaking

Dynatrace | Barcelona, Spain, Remote
Tags: DevOps, Bachelor's degree, mobile, Ansible, Azure, iOS, Java, OpenStack, Android, CSS, Kubernetes, Jenkins, AWS, JavaScript, PHP

Dynatrace is hiring a Remote Customer Success Engineer, French Speaking

Job Description

  • Constantly go above and beyond to serve our customers and be a shining, standard-setting example of our Core Values 

  • Triage, diagnose, and provide solutions to the most complex configuration issues with Dynatrace solutions and non-Dynatrace integrations

  • Strategize on the overall technical objectives and long-term goals of the team 

  • Provide advice and guidance as the subject matter expert to ensure successful ongoing usage, adoption of the product, and foster growth of the customer’s footprint 

  • Be the customer’s advocate by knowing their goals and use cases, then suggesting process improvements, product adoption, configuration, and additional features to meet their requirements 

  • Provide web-based training to user groups to support organizational adoption 

  • Undertake discovery and education activities to identify opportunities for Dynatrace usage across organizational functions and processes 

  • Provide coaching to CSEs to help them grow in their technical knowledge and personally

  • Function as a frontline technical resource for “best practice” and informal customer questions 

  • Engage with customer support as a customer advocate to ensure speedy resolution of customer issues 

  • Engage with Product management as the customer advocate on product roadmap discussions 

  • Participate and prepare for Monthly and Quarterly Business Reviews with customers 

  • Maintain current functional and technical knowledge of Dynatrace products and services 

  • Help to document best practices in developing and using Dynatrace 

  • Partner with support engineers, PM, and R&D to help customers and account teams to speed resolution. Help communicate, escalate and advocate on behalf of the customer  

  • Provide insights, advice, and ‘street credibility’ with technical teams to understand technical issues and possible workarounds 

  • Help customers and account teams to understand support ticket trends/themes to be used to develop success plans, enablement advice, etc.  

  • Have deep understanding of customers’ infrastructure, architecture, and business/regulatory requirements to speed up resolution 

Qualifications

  • Education: Bachelor's degree in Computer Science, Information Technology, or equivalent work experience   

  • Work experience: 4+ years of experience

  • Experience working with large enterprise customers, including executive leadership 

  • Demonstrated ability in leadership, mentorship, and organizational behavior 

  • A track record of going above and beyond for your team and customers 

  • Ability to manage executive relationships and discussions (VP/CxO) 

  • Must have exceptional English and French written and verbal communication skills, as well as organizational and teamwork skills, and the ability to act fast and responsibly

  • Impeccable time management skills and an ability to self-direct 

  • Demonstrated experience being a Subject Matter Expert (SME) for Dynatrace technologies, methodologies, frameworks, and 3rd party technologies related to Dynatrace  

  • Willingness to learn new technologies and resolve complex technical issues 

  • Professional Level Dynatrace certification (or get certification within six months) 

  • Two or more industry-relevant Associate Level certifications (AWS, Azure, k8s, …) 

  • Strong technical understanding and experience in SaaS industry 

  • Knowledge and experience with one or more of the following technologies related to Dynatrace:   

  • Cloud/new stack technologies such as OpenStack, OpenShift, AWS, Azure, Google Cloud, Cloud Foundry, Kubernetes, SAP, etc.   
  • Web and application server technologies such as Apache, IIS, WebSphere, WebLogic, and JBoss   
  • Server/Server-side technologies such as Java Servlets, PHP, HTML, CSS, JavaScript, and Ajax   
  • Mobile application technologies such as iOS and Android Webkit   
  • DevOps toolchain applications such as Ansible, Jenkins, Chef, Puppet, etc. 
  • CMDB/ITSM Technologies/platforms such as ServiceNow and BMC 
  • Must be customer service oriented and believe in teamwork, collaboration, adaptability & Initiative 
  • Demonstrable success in thinking strategically and executing tactically while providing consistent and high customer satisfaction and retention levels in a fast-paced environment 

See more jobs at Dynatrace

Apply for this job

Posted 30+ days ago

DevOps Team Lead

Ripjar | London, Bristol or Cheltenham, England, United Kingdom, Remote Hybrid
Tags: DevOps, Jira, Terraform, Design, Ansible, Azure, AWS

Ripjar is hiring a Remote DevOps Team Lead

Ripjar specialises in the development of software and data products that help governments and organisations combat serious financial crime. Our technology is used to identify criminal activity such as money laundering and terrorist financing and enables organisations to enforce sanctions at scale to help combat rogue entities and state actors.

Team Mission:

The core infrastructure team at Ripjar is responsible for commissioning and maintaining the underlying IT infrastructure that supports the company's data analytics and intelligence solutions. These systems are provisioned in a hybrid public/private cloud environment and include the underlying clusters used for large-scale analytics as well as internal tooling and our customer-facing SaaS service.

Position Overview:

The DevOps Team Lead will oversee the day-to-day management of the core-infrastructure team (currently 5 headcount), ensuring the efficient provisioning, monitoring, maintenance, and troubleshooting of our mixed public and private cloud environment. This role requires a strategic mindset to design and implement infrastructure improvements while managing performance, capacity, and cost. The role holder will collaborate closely with Product, Delivery, Engineering, and Security to align infrastructure capabilities with business needs alongside regulatory requirements.

Key Responsibilities:

Team Leadership

  • Coordination: Oversee the day-to-day activities of the operations team, ensuring that processes run smoothly and efficiently. This includes assigning tasks, monitoring progress, and addressing any issues that arise.
  • Technical Oversight: Design and implement improvements to existing infrastructure as well as new services. Evaluate the benefits of third-party managed solutions vs internal provision. 
  • Performance Management: Assess and improve the performance of core-infrastructure team members, fostering a culture of continuous development.

Operations Management

  • Process Management: Establish and optimise processes that enable the team to independently handle routine tasks.
  • Jira Service Desk: Operate an internal facing service desk ensuring triage and timely ticket management as well as evolving ticket types to streamline support requests.
  • Out-of-Hours Support: Coordinate out-of-hours support activities, ensuring a collective knowledge base for non-trivial SaaS support issues.
  • Incident Response: Manage and contribute to incident response efforts for infrastructure-related issues, ensuring timely resolutions.

Capacity & Cost Management

  • Capacity Planning: Conduct infrastructure capacity planning, utilising metrics to inform decisions and ensure readiness for business scaling.
  • Cost Tracking & Optimization: Monitor and optimise costs associated with infrastructure and services, ensuring alignment with budgetary goals. 

Compliance & Audits

  • Compliance: Manage and contribute to recurring annual compliance activities, including ISO27001 and SOC2 audits, in collaboration with the respective audit teams and third-party advisors.
  • Security: Ensure security best practice including identifying potential threats and vulnerabilities, designing secure software systems, and implementing robust security measures.
  • Disaster Recovery Testing: Participate in disaster recovery testing, ensuring robust recovery processes are in place.

In addition to the above the role holder should remain technically proficient such that they can contribute to the daily activities of the team including provisioning, monitoring, maintenance, and troubleshooting of our core services.

Requirements:

  • Minimum of 5 years in operations management, particularly within a platform / core infrastructure team (or equivalent).
  • Proven ability to lead, mentor, and develop team members, fostering a culture of continuous improvement.
  • Proficiency in managing hybrid cloud environments (both public and private) and familiarity with relevant technologies and platforms (e.g., AWS, Azure, Google Cloud). Our production workloads are currently hosted in AWS. 
  • Proficiency in infrastructure provisioning, systems administration and monitoring tools. We use Terraform, Ansible, k8s and Datadog to manage a range of RHEL/Rocky 9 hosts. Our analytics clusters make use of Spark, HBASE and HDFS. 
  • Experience in designing and implementing scalable infrastructure solutions, ideally with some exposure to parallel processing environments used for large-scale analytics.
  • An appreciation of security best practice in areas such as network security, threat modelling, vulnerability assessment, IAM, SIEM and incident response. 
  • Skills in system monitoring, performance tuning, and troubleshooting infrastructure and micro-service-based architectures.
  • Understanding of compliance frameworks like ISO 27001 and SOC 2, and experience in managing audits and compliance activities.
  • Familiarity with incident response processes and tools, ensuring timely resolution of issues.

Benefits:

  • Competitive salary DOE
  • 25 days annual leave, rising to 30 days after 5 years of service.
  • Flexible Hybrid working - 2 days in the office and 3 days at home
  • 35 hour working week.
  • Company Share Scheme.
  • Private Family Healthcare.
  • Employee Assistance Programme.
  • Company contributions to your pension (Salary exchange scheme)
  • Enhanced maternity/paternity pay.
  • The latest tech including a top of the range MacBook Pro.
  • Free food and drink
  • Hybrid working from our Cheltenham, Bristol or London offices

Ripjar’s Commitment to Diversity

“Diversity is essential in the way we operate. Having people from different backgrounds, genders and experiences ensures that we make decisions with a truly global perspective. Diversity gives us strength in our technology, analysis and relationships.” - Maria Cox, Head of People Operations

See more jobs at Ripjar

Apply for this job

Posted 30+ days ago

DevOps Engineer

Onit | Remote
Tags: Full Time, DevOps, S3, SQS, EC2, Terraform, Postgres, SQL, Ansible, Git, Ruby, Kubernetes, Linux, Jenkins, Python, AWS

Onit is hiring a Remote DevOps Engineer

See more jobs at Onit

Apply for this job

Posted 30+ days ago

Cloud Data Consultant (M/F)

Business & Decision | Toulouse, France, Remote
Tags: DevOps, Terraform, Ansible, Azure, Git, Python

Business & Decision is hiring a Remote Cloud Data Consultant (M/F)

Job Description

Attached to our AI/Big Data hub in Toulouse, you will join a dynamic team dedicated to technical projects for our local clients in the fields of artificial intelligence and data architecture (the modern data stack).

As a Cloud Data Engineer technical consultant, you will play a central role in implementing solutions based on the major cloud platforms, notably Google Cloud Platform and Microsoft Azure, and will support our clients in turning their data into strategic assets.

What you will do:

  • Automate continuous integration and deployment processes using DevOps tools (Git, Terraform, Ansible).
  • Design and develop complex, high-performance data pipelines on cloud platforms (DataProc, BigQuery, Cloud Functions, MS Fabric, Databricks, Data Factory).
  • Optimize cloud infrastructure for managing and processing large volumes of data, ensuring scalable infrastructure while taking a FinOps approach.
  • Collaborate with multidisciplinary teams (Data Science, IT, business) to understand functional and technical needs.
  • Keep up with technology developments to apply best practices in Data Engineering and Cloud.
  • Diagnose and resolve complex problems related to the architecture, performance, or security of data pipelines.
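The pipeline work described above can be reduced to a minimal extract-transform sketch in plain Python; the records, field names, and validation rule are hypothetical, and a real pipeline would target services such as BigQuery, DataProc, or Data Factory rather than in-memory lists.

```python
# Minimal extract-transform sketch: parse raw rows into typed records and
# drop rows that fail validation. All data below is hypothetical.
from datetime import date

raw_rows = [
    {"client": "acme", "amount": "120.50", "day": "2024-03-01"},
    {"client": "acme", "amount": "bad-value", "day": "2024-03-02"},
]

def transform(rows):
    out = []
    for row in rows:
        try:
            out.append({
                "client": row["client"],
                "amount": float(row["amount"]),
                "day": date.fromisoformat(row["day"]),
            })
        except ValueError:
            continue  # quarantine/dead-letter handling would go here
    return out

print(len(transform(raw_rows)))  # 1
```

The same parse-validate-load shape carries over directly to the cloud services listed above; only the sources and sinks change.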

Qualifications

  • Experience: At least 5 years of experience in Data Engineering, with proven expertise on cloud projects, ideally on GCP.
  • Technical skills:
    • Expertise in Python for developing data pipelines.
    • Solid command of GCP services such as DataProc, BigQuery, and Cloud Functions, or Azure services such as Data Factory, Synapse (or Databricks), and Fabric.
    • Knowledge of DevOps (Git, Terraform, Ansible) for managing infrastructure and continuous-integration processes.
  • Soft skills: Autonomous, proactive, with excellent communication skills to interact effectively with technical and business teams.
  • Bonus: Deeper DevOps experience is an asset. Proficiency in English is a plus.

 

See more jobs at Business & Decision

Apply for this job

Posted 30+ days ago

Intern - Data Engineer Consultant (M/F)

MAZARS | Courbevoie, France, Remote
Tags: DevOps, Rust, S3, Tableau, Terraform, Ansible, MongoDB, Azure, API, Git, C++, Docker, Elasticsearch, MySQL, Kubernetes, Ubuntu, Linux, Python, AWS

MAZARS is hiring a Remote Intern - Data Engineer Consultant (M/F)

Job Description

Upon arrival, you will join the Forvis Mazars Data Services team, made up of more than 60 data specialists spread across two hubs (Paris La Défense, New York). We cover the entire data value chain: data strategy and use-case qualification, data governance and quality, data visualization, data science, data engineering, and data architecture. Our story began with a small dedicated team in Paris La Défense that quickly expanded to New York, reflecting our growth and international ambition. We firmly believe that data engineering is the cornerstone of this industry, and we draw on a rich and varied technology stack to serve our clients.

After a training phase covering our methods and tools, you will work on Data Engineering engagements for a portfolio of CAC40 and SBF120 clients, in France and internationally:

  • You will help improve our clients' operational performance by exploiting and adding value to data in concrete business use cases (strategy, marketing and sales, R&D, finance, CSR, etc.).
  • You will drive the end-to-end development of data flows, from extraction/transformation through to consumption (API, BI/visualization, etc.).
  • You will propose and implement continuous integration and deployment of pipelines across several paradigms: serverless cloud (AWS Lambdas, Azure Functions, GC Functions, Kubernetes) or private cloud (OpenNebula, CloudStack, CephFS).

Why join the adventure?

  • Guidance from experts: Our Partners in charge of the Data Services team have rare expertise, notably in MLOps (operational CI/CD since 2013). They are actively involved in the firm's most innovative projects and in the creation of technology start-ups acquired by Mazars. This demanding, formative environment will bring you to the forefront of best practices in coding and operations, ensuring high-quality project delivery.
  • Autonomy and ambition: You will work in a young, dynamic, empowering ecosystem with strong growth ambitions. Get involved in developing the Mazars Lab and help build our data consulting service offering.
  • Hacking spirit: We value constant technology watch, staying at the cutting edge of the most capable open-source technologies. Our consultants train continuously to broaden their skill base.
  • International firm: Joining Forvis Mazars means joining a firm with an international footprint and benefiting from varied career opportunities: a data bootcamp, a state-of-the-art learning center (Forvis Mazars Academy, LinkedIn Learning, etc.), and international mobility.
  • Share with us the pride of delivering relevant solutions to our clients. You will push yourself on varied, ambitious technical topics within a supportive, caring team!

Qualifications

We are looking for candidates passionate about data: final-year interns with a master's-level degree (engineering school or a postgraduate program in a data-related field). You have shown an interest in application development with a data component through internships, coursework, or personal projects.

Required skills:

  • Analytical programming languages (Python, R, Haskell, Rust, etc.)
  • Persistence layers (MySQL, MongoDB, Elasticsearch, S3, Neo4j)
  • Experience with Linux (Ubuntu, Debian, CentOS) and version-control systems (Git)
  • Familiarity with continuous-integration pipelines

Additional skills appreciated:

  • Software quality and DevOps (GitLab, Ansible, Docker, Terraform)
  • Interaction with full-stack teams (REST APIs, VueJS, React Native)
  • Experience with cloud providers (Azure, AWS, GCP)
  • Predictive-analytics toolchain (scikit-learn, TensorFlow)
  • Business Intelligence tools (Power BI, Qlik, Tableau)

You are curious, autonomous, and enterprising, and you show initiative. You are fluent in spoken and written English.

You will be based in Paris, with possible travel within France and abroad.

See more jobs at MAZARS

Apply for this job

Posted 30+ days ago

Cloud Systems Engineer (Mid-level)

Tags: DevOps, Terraform, mobile, Ansible, Azure, Git, C++, Ubuntu, Linux, Python, AWS

Signify Health is hiring a Remote Cloud Systems Engineer (Mid-level)

How will this role have an impact?

Signify Health is seeking a driven Cloud Systems Engineer to join our Cloud Engineering organization. Reporting to the Manager of Cloud Operations, this role is ideal for individuals looking to build on their cloud infrastructure expertise and contribute to the automation, management, and optimization of cloud environments. We encourage candidates who are eager to grow, collaborate across teams, and leverage modern tools and technologies to enhance cloud operations.

Key Responsibilities

  • Cloud Infrastructure Support: Assist in designing, implementing, and managing secure, scalable, and highly available cloud infrastructure.
  • Automation & Infrastructure as Code (IaC): Use tools like Terraform, Ansible, and Git to automate cloud provisioning, configuration, and management. Help build and maintain automated workflows.
  • Security & Compliance: Support security best practices in cloud environments, ensuring adherence to internal security policies and industry compliance standards.
  • Documentation & Knowledge Sharing: Contribute to maintaining comprehensive documentation for infrastructure, automation processes, and operational procedures to support cross-team knowledge sharing.
  • Cross-Functional Collaboration: Work with teams across engineering, SRE, and security to ensure cloud solutions meet business requirements and align with best practices.
  • Performance & Cost Optimization: Support initiatives focused on cloud performance improvements and cost optimization, using monitoring tools and data-driven insights.
  • On-Call Rotation: Participate in on-call rotations, ensuring the reliability and availability of cloud infrastructure and responding to incidents as required.

What You’ll Need

Experience:

  • 1-2 years in a Cloud Engineer, DevOps, or related role, supporting cloud infrastructure in production environments.
  • Hands-on experience with Infrastructure as Code (IaC) tools like Terraform and Ansible, and using Git for version control and collaboration.
  • Exposure to CI/CD pipelines and cloud platforms like Azure, AWS, or GCP.

Technical Skills:

  • Experience in managing Linux (RedHat, Ubuntu) and/or Windows Server, including configuration, optimization, and troubleshooting.
  • Basic understanding of networking protocols (e.g., TCP/IP, DNS, HTTP) and their applications in cloud environments.
  • Proficiency in scripting with languages like PowerShell, Bash, or Python to automate cloud operations.
  • Experience with cloud monitoring tools (e.g., New Relic, Prometheus) for infrastructure performance and alerting.
  • Nice-to-Have: Experience with Go for scripting and automation tasks.
  • Nice-to-Have: Experience with Ansible AWX for configuration management.
  • Nice-to-Have: Familiarity with Git for version control and collaborative development.
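The scripting skill above (PowerShell, Bash, or Python for automating cloud operations) might look like this minimal Python sketch; the instance records and the required "owner" tag are hypothetical, standing in for data a cloud provider's API or CLI would return.

```python
# Minimal cloud-ops scripting sketch: flag instances missing a required tag.
# The instance records and the "owner" tag are hypothetical examples.

REQUIRED_TAG = "owner"

instances = [
    {"id": "vm-001", "tags": {"owner": "platform-team", "env": "prod"}},
    {"id": "vm-002", "tags": {"env": "dev"}},
]

def untagged(instances, tag=REQUIRED_TAG):
    # Return the IDs of instances that lack the required tag.
    return [i["id"] for i in instances if tag not in i.get("tags", {})]

print(untagged(instances))  # ['vm-002']
```

Checks like this are typically wired into scheduled jobs or CI pipelines so that cost-attribution and compliance gaps surface automatically rather than in manual audits.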

  Soft Skills:

  • Strong teamwork and collaboration skills, able to work effectively within cross-functional teams to deliver results.
  • A proactive learner, with an interest in cloud technologies and a desire to continuously improve skills.
  • Excellent communication skills, with the ability to document processes clearly and share knowledge across teams.

The base salary hiring range for this position is $72,100 to $125,600. Compensation offered will be determined by factors such as location, level, job-related knowledge, skills, and experience. Certain roles may be eligible for incentive compensation, equity, and benefits.
In addition to your compensation, enjoy the rewards of an organization that puts our heart into caring for our colleagues and our communities.  Eligible employees may enroll in a full range of medical, dental, and vision benefits, 401(k) retirement savings plan, and an Employee Stock Purchase Plan.  We also offer education assistance, free development courses, paid time off programs, paid holidays, a CVS store discount, and discount programs with participating partners.  

About Us:

Signify Health is helping build the healthcare system we all want to experience by transforming the home into the healthcare hub. We coordinate care holistically across individuals’ clinical, social, and behavioral needs so they can enjoy more healthy days at home. By building strong connections to primary care providers and community resources, we’re able to close critical care and social gaps, as well as manage risk for individuals who need help the most. This leads to better outcomes and a better experience for everyone involved.

Our high-performance networks are powered by more than 9,000 mobile doctors and nurses covering every county in the U.S., 3,500 healthcare providers and facilities in value-based arrangements, and hundreds of community-based organizations. Signify’s intelligent technology and decision-support services enable these resources to radically simplify care coordination for more than 1.5 million individuals each year while helping payers and providers more effectively implement value-based care programs.

To learn more about how we’re driving outcomes and making healthcare work better, please visit us at www.signifyhealth.com

Diversity and Inclusion are core values at Signify Health, and fostering a workplace culture reflective of that is critical to our continued success as an organization.

We are committed to equal employment opportunities for employees and job applicants in compliance with applicable law and to an environment where employees are valued for their differences.

See more jobs at Signify Health

Apply for this job

+30d

Software Craftsmanship Developer (M/F)

TalanParis, France, Remote
DevOPSTDDterraformansibleazureqagitjavac++dockertypescriptangularAWSjavascriptbackendfrontend

Talan is hiring a Remote Software Craftsmanship Developer (M/F)

Job Description

You will join the teams of our Digital Factory as a craftsman developer working on Cloud architectures in a CI/CD environment, putting your knowledge of the following technical stack into practice:

  • Backend: Java, Spring Boot, Spring Cloud
  • Frontend: TypeScript or JavaScript
  • Testing: TDD, BDD (Cucumber)
  • CI/CD: DevOps and Cloud deployment (Azure or AWS), Git, Ansible / Terraform, SonarQube, Apache, Maven, Gradle, Docker
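The stack above is Java-centric, but the TDD loop it names (test first, then the simplest passing implementation) can be sketched language-agnostically; here in Python with `unittest` standing in for JUnit, purely as an illustration:

```python
# Minimal TDD illustration: the test exists before the implementation
# and drives it. Python's stdlib unittest stands in for JUnit here.
import unittest

# Red: the failing test is written first.
class FizzBuzzTest(unittest.TestCase):
    def test_multiple_of_three(self):
        self.assertEqual(fizzbuzz(9), "Fizz")
    def test_multiple_of_fifteen(self):
        self.assertEqual(fizzbuzz(30), "FizzBuzz")

# Green: the simplest implementation that makes the tests pass.
def fizzbuzz(n: int) -> str:
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)

# Run the suite programmatically and confirm it passes.
suite = unittest.TestLoader().loadTestsFromTestCase(FizzBuzzTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
assert result.wasSuccessful()
```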

You will:

  • Collaborate with end users to understand their needs and build a shared product vision
  • Work with QA and the PO on writing BDD scenarios
  • Take part in the various agile ceremonies
  • Work with the tech lead on the technical analysis and design phases
  • Implement tasks while following clean-code principles
  • Take part in code reviews and propose improvements to code quality and architecture
  • Prepare and present demos to end users
  • Put testing strategies in place
  • Work on continuously improving the team's craftsmanship and clean-code practices

You will also:

  • Act as a technical reference and help more junior developers with their technical tasks
  • Take part in or lead our weekly "ruches" ("hives"): presentations of technical topics for collaborative technology watch

Qualifications

  • You have at least 3 years of development experience in backend Java/JEE environments, with initial frontend experience (Angular, React, or VueJS) in an agile context.
  • You enjoy technical complexity and want to join a company certified a Great Place to Work for the 9th consecutive year, one that supports your career development on both the technical and human sides!

Recruitment process:

The recruitment team is committed to offering you a fast and smooth recruitment process:

  • A first 30-45 minute HR interview by video call with the recruiter to present the role and understand your career plans
  • 2 interviews (at least 1 of them at our offices):
    • a technical discussion: no take-home technical test; we code together and compare our practices,
    • a discussion with your future manager

 

See more jobs at Talan

Apply for this job

+30d

DevOps Engineer

LostarSakarya, Turkey, Remote
DevOPSDjangoterraformansibleazuredockerkuberneteslinuxjenkinspythonAWS

Lostar is hiring a Remote DevOps Engineer

Job Description

Note: your chances of success are higher if you apply after reading the posting all the way through.

We are looking for a DevOps Engineer to manage and develop our company's DevOps processes. In this role, you will collaborate with software development teams to design and implement CI/CD processes, manage cloud infrastructure, and develop automation solutions to improve operational processes.

Responsibilities:

  • Set up and optimize CI/CD (Continuous Integration/Continuous Deployment) pipelines.
  • Manage and maintain infrastructure in cloud environments (AWS, Azure, GCP, etc.).
  • Develop automation tools and scripts (Terraform, Ansible, Jenkins, etc.).
  • Optimize application performance, monitoring, and security processes.
  • Drive continuous improvement and automation in operational processes.
  • Develop solutions for microservice architecture and container management (Docker, Kubernetes).
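The fail-fast behavior at the heart of the CI/CD pipelines this role would build can be sketched in a few lines (stage names and the runner itself are illustrative; real tools like Jenkins or GitLab CI express this declaratively):

```python
# Hypothetical sketch of a CI/CD pipeline as ordered stages with
# fail-fast semantics: the first failing stage stops the run, so a
# broken test never reaches deploy.

def run_pipeline(stages):
    """Run stages in order; stop at the first failure, like most CI tools."""
    results = []
    for name, step in stages:
        ok = step()
        results.append((name, ok))
        if not ok:
            break  # fail fast: later stages never run
    return results

stages = [
    ("build", lambda: True),
    ("test", lambda: False),   # a failing test stops the pipeline
    ("deploy", lambda: True),
]
results = run_pipeline(stages)
assert results == [("build", True), ("test", False)]  # deploy was skipped
```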

Qualifications

  • Experience in DevOps, system administration, or software engineering.
  • Experience with CI/CD tools (BitBucket, GitLab CI, CircleCI, etc.).
  • Knowledge of cloud services (AWS, Azure, GCP).
  • Expertise in container management (Docker, Kubernetes).
  • Hands-on experience with automation tools (Terraform, Ansible, etc.).
  • Experience with the Linux operating system and the command line (bash, shell scripting).
  • Strong problem-solving and collaboration skills.
  • Preferably, basic knowledge of software development processes (Python, Django, etc.).
  • Intermediate-to-good command of English.
  • Turkish citizenship.
  • Along with your CV, we also expect a cover letter of roughly 3,000 characters describing in prose how you could contribute to our team.
  • Applications without a cover letter will not be considered.

See more jobs at Lostar

Apply for this job

+30d

DevOps Engineer

SmartMessageİstanbul, TR - Remote
DevOPSBachelor's degree3 years of experienceterraformDesignansibleazuredockerkuberneteslinuxjenkins

SmartMessage is hiring a Remote DevOps Engineer

Who are we?

We are a globally expanding software technology company that helps brands communicate more effectively with their audiences. We look forward to expanding our people capabilities and our success in developing high-end solutions beyond existing boundaries, and to establishing our brand as a Global Powerhouse.

We are free to work from wherever we want and go to the office whenever we like!!!

What is the role?

We are looking for a DevOps Engineer to take part in our team.

What you’ll be responsible for:

  • Lead all phases of deployment operations, including the installation and automation of solutions for site availability
  • Take applications live on Kubernetes production systems
  • Help fix application performance issues on Kubernetes systems
  • Carry out application migrations to Kubernetes systems
  • Observe and supervise systems running in production, and tackle their problems in a repeatable manner
  • Develop and maintain design and troubleshooting documentation
  • Develop internal solutions and apply best practices to further improve and automate site reliability
  • Support and develop continuous delivery and integration applications in collaboration with our development team
  • Improve communication between development and operations teams, and fix defects in earlier phases of development

    We are looking for a passionate talent who has:

    • A Bachelor's degree in Computer Science or a related technical field, or equivalent practical experience
    • A detailed problem-solving approach, coupled with effective interpersonal skills and a sense of drive
    • Experience with configuration management systems such as Ansible or Terraform
    • Experience with performance analysis and debugging in Linux environments and/or Kubernetes
    • Experience with Unix/Linux operating system internals (e.g., filesystems, system calls), and with networking or cloud systems
    • Experience analyzing and troubleshooting systems
    • Experience with container orchestration using Kubernetes
    • Experience implementing automation tools and frameworks (CI/CD pipelines)
    • An understanding of OS and distributed systems concepts, and of network concepts (the OSI model, etc.)
    • Knowledge of the SDLC and DevOps concepts
    • In-depth knowledge of designing, building, and maintaining CI and CD pipelines
    • Experience with Jenkins, Octopus, Azure DevOps, Docker, and Kubernetes, and a solid understanding of security practices
    • Experience configuring and supporting Windows- and Linux-based servers and applications
    • Familiarity with Git/GitLab branching models
    • Experience developing and maintaining automation tools to reduce manual operational tasks
    • A minimum of 3 years of experience
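The Kubernetes troubleshooting this role describes often starts with triaging pod state. A small illustrative sketch of that first step (the sample output and state list are assumptions; a real script would shell out to `kubectl`):

```python
# Hypothetical sketch of pod triage: scan `kubectl get pods` output for
# pods in unhealthy states. The sample output below is illustrative.

BAD_STATES = {"CrashLoopBackOff", "ImagePullBackOff", "Error", "Pending"}

def unhealthy_pods(kubectl_output: str):
    """Return (name, status) for every pod not in a healthy state."""
    bad = []
    for line in kubectl_output.strip().splitlines()[1:]:  # skip header row
        fields = line.split()
        name, status = fields[0], fields[2]
        if status in BAD_STATES:
            bad.append((name, status))
    return bad

sample = """\
NAME          READY   STATUS             RESTARTS   AGE
api-7d9f      1/1     Running            0          4d
worker-x2b    0/1     CrashLoopBackOff   12         4d
cache-9k1     1/1     Running            0          2d
"""
assert unhealthy_pods(sample) == [("worker-x2b", "CrashLoopBackOff")]
```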

    Join our team!

    See more jobs at SmartMessage

    Apply for this job

    +30d

    Senior Engineer, Platform

    ATPCO1Herndon, VA, Remote
    DevOPSLambdaterraformDesignansibleazuredockerkubernetesjenkinspythonAWS

    ATPCO1 is hiring a Remote Senior Engineer, Platform

    Job Description

    About the Role:

    We are looking for an experienced and visionary Senior Platform Engineer to lead the development and optimization of our AWS cloud infrastructure. In this role, you will leverage your deep expertise in cloud technologies, particularly AWS, to drive architectural decisions, automate infrastructure, and implement cutting-edge solutions to improve platform reliability, scalability, and performance. As a senior member of the team, you will collaborate closely with leadership, mentor junior engineers, and play a key role in shaping the future of our cloud strategy.

    Key Responsibilities:

    · Lead the design and implementation of highly available, scalable, and secure cloud infrastructure in AWS, ensuring alignment with business needs and best practices.

    · Architect cloud solutions that are optimized for performance, security, and cost-efficiency, including multi-region, multi-account, and hybrid cloud architectures.

    · Drive automation initiatives, building robust Infrastructure as Code (IaC) frameworks using tools like AWS CloudFormation, Terraform, or Ansible to automate the provisioning, scaling, and monitoring of cloud resources.

    · Provide technical leadership and mentorship to platform engineers, guiding them in cloud best practices, security, DevOps methodologies, and problem-solving approaches.

    · Collaborate with cross-functional teams including product, development, and security to ensure seamless integration of services and applications with the cloud infrastructure.

    · Oversee the implementation and management of CI/CD pipelines, ensuring fast, reliable, and automated deployment processes.

    · Lead complex troubleshooting and performance tuning efforts for cloud infrastructure and applications, ensuring the highest levels of reliability and performance.

    · Implement security best practices for cloud infrastructure, including monitoring, logging, and incident response, working closely with security and compliance teams to maintain a secure cloud environment.

    · Evaluate and integrate new AWS services and emerging cloud technologies to enhance platform capabilities and drive continuous improvement.

    · Develop and enforce governance models for cloud environments, ensuring compliance with regulatory requirements and organizational policies.

    · Act as the subject matter expert (SME) on AWS, providing guidance to executives and stakeholders on cloud strategy and infrastructure investments.

     

    Qualifications:

    · 10+ years of experience in cloud infrastructure design and management, with a strong focus on AWS.

    · Expertise in architecting, implementing, and managing complex cloud environments, including large-scale, distributed systems.

    · Proven experience with Infrastructure as Code (IaC), particularly with AWS CloudFormation, Terraform, or Ansible.

    · Strong experience with Linux/Unix systems and automation through scripting languages such as Python, Bash, etc.

    · In-depth knowledge of networking, security, and cloud architecture best practices in AWS environments, including VPC design, IAM, and encryption.

    · Experience implementing and optimizing CI/CD pipelines with tools like Jenkins, AWS CodePipeline, or GitLab CI.

    · Advanced proficiency with monitoring, logging, and observability tools (CloudWatch, Datadog, Prometheus) to ensure infrastructure health and performance.

    · Leadership and mentoring experience, with a proven ability to guide junior engineers and promote a culture of continuous learning.

    · Excellent problem-solving and troubleshooting skills, with experience in root cause analysis of complex cloud issues.

    · Strong collaboration and communication skills, with the ability to work with both technical and non-technical stakeholders.

    Preferred Skills (Nice to Have):

    · AWS Certified Solutions Architect (Professional), AWS Certified DevOps Engineer (Professional), or other advanced AWS certifications.

    · Deep experience with Kubernetes, Docker, and container orchestration in the cloud.

    · Expertise in multi-cloud or hybrid cloud strategies, including working with both AWS and other cloud platforms like Azure or Google Cloud.

    · Experience with service mesh architectures (e.g., Istio, Linkerd) and microservices deployment in AWS.

    · Hands-on experience with security compliance frameworks (e.g., SOC2, HIPAA, ISO27001) and implementing security controls in cloud environments.

    · Familiarity with serverless architectures using AWS Lambda and other AWS serverless services.
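The serverless model in the last bullet reduces to a single function contract: an AWS Lambda handler takes an event and a context. A minimal sketch (the event shape is illustrative, not a real API Gateway payload):

```python
# Hypothetical sketch of a Lambda-style handler: a plain function with
# the standard (event, context) signature, returning an HTTP-shaped dict.
import json

def handler(event, context):
    """Echo handler in the standard Lambda signature."""
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }

# Locally, the handler can be invoked directly with a dict event:
response = handler({"name": "ATPCO"}, context=None)
assert response["statusCode"] == 200
assert json.loads(response["body"]) == {"message": "hello, ATPCO"}
```

In AWS, the same function would be wired to a trigger (API Gateway, SQS, S3) and the platform supplies the event and context for each invocation.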

     

    What We Offer:

    · Opportunity to lead impactful projects on a large-scale AWS cloud infrastructure.

    · Leadership role in the platform team, with the ability to shape cloud strategies and influence company-wide cloud adoption.

    · Competitive compensation package, including bonuses and stock options.

    · Continued access to AWS certification programs and advanced training resources.

    · A collaborative environment that encourages innovation, ownership, and professional growth.

    · Flexible work arrangements, including remote work options.

    Salary Range: $126,000 - $140,000

    *The disclosed range estimate has not been adjusted for applicable geographic differential associated with the location*


    See more jobs at ATPCO1

    Apply for this job

    +30d

    Principal Engineer - Full Stack, Big Data

    agileterraformsqlDesignansibleapigitjenkins

    Integral Ad Science is hiring a Remote Principal Engineer - Full Stack, Big Data

    Integral Ad Science (IAS) is a leading global media measurement and optimization platform that delivers the industry’s most actionable data to drive superior results for the world’s largest advertisers, publishers, and media platforms. IAS’s software provides comprehensive and enriched data that ensures ads are seen by real people in safe and suitable environments, while improving return on ad spend for advertisers and yield for publishers. Our mission is to be the global benchmark for trust and transparency in digital media quality. For more information, visit integralads.com.

    As a *Principal Full Stack Engineer, you will provide technical leadership and expertise to help build ad verification, analytics, and ad fraud solutions that deliver on the team’s mission of helping advertisers understand the quality of the ad opportunities they’re acquiring. Our team provides the tools that advertisers need to maximize their ROI by displaying the recommended targets for their investments.

    The ideal candidate has a track record of architecting and building end-to-end software solutions, enjoys working in a collaborative and agile environment, and brings innovative solutions to complex problems with a desire to improve the status quo. 

    (*Please note: at IAS, Principal Engineer is an IC6 level position)

    What you’ll do:

    • Architect, design, build and integrate our core ad analytics and ad fraud products end to end 

    • Lead the entire software lifecycle including hands-on development, code reviews, testing, deployment, and documentation, in addition to mentoring the team

    • Partner with the Product team and other stakeholders across the company to understand product requirements, gather business and technical requirements from broadcaster clients, and research and develop solutions, including API development

    • Manage multiple competing priorities in a fast-paced, exciting, collaborative environment

    • Build and maintain high-performance, fault-tolerant and scalable distributed systems that can handle our massive scale

    • Review code developed by other developers and provide feedback to ensure best practices (e.g., style guidelines, checking code in, accuracy, testability, and efficiency)

    • Automate cloud infrastructure, services, and observability

    • Develop CI/CD pipelines and testing automation

    • Establish and uphold best engineering practices through thorough code and design reviews and improved processes

    • Groom junior engineers through mentoring and delegation

    Who you are and what you have:

    • BS/MS in Computer Science, or related STEM degree

    • 12-15+ years of hands-on software development experience

    • Solid overall programming skills, able to write modular, maintainable code in Java/Python & SQL

    • Expert understanding of SQL, dimensional modeling, and at least one relational database including solid contribution to ERD

    • Solid proficiency with automation frameworks/tools like Git, Jenkins, Ansible, and Cloudformation (or Terraform)

    • Solid proficiency with containers and infrastructure-as-code fundamentals

    • Solid proficiency with Amazon Web Services

    • Good understanding of data engineering and related frameworks

    • Good understanding of frontend frameworks like ReactJS/AngularJS

    • Familiarity with MVC, SOA, and RESTful web services

    What puts you over the top:

    • Experience working with Databricks

    • Prior experience in an external client and/or vendor facing role

    • Experience with managing, leading and/or mentoring a development team 

    • Experience with big data and data pipelines

    • Experience working with audio or video technology

    • Experience in AdTech

     

    New York Applicants: The salary range for this position is $208,600 - $357,600. Actual pay may vary based on experience or geographic location.

    About Integral Ad Science

    Integral Ad Science (IAS) is a leading global media measurement and optimization platform that delivers the industry’s most actionable data to drive superior results for the world’s largest advertisers, publishers, and media platforms. IAS’s software provides comprehensive and enriched data that ensures ads are seen by real people in safe and suitable environments, while improving return on ad spend for advertisers and yield for publishers. Our mission is to be the global benchmark for trust and transparency in digital media quality. For more information, visit integralads.com.

    Equal Opportunity Employer:

    IAS is an equal opportunity employer, committed to our diversity and inclusiveness. We will consider all qualified applicants without regard to race, color, nationality, gender, gender identity or expression, sexual orientation, religion, disability or age. We strongly encourage women, people of color, members of the LGBTQIA community, people with disabilities and veterans to apply.

    California Applicant Pre-Collection Notice:

    We collect personal information (PI) from you in connection with your application for employment or engagement with IAS, including the following categories of PI: identifiers, personal records, commercial information, professional or employment or engagement information, non-public education records, and inferences drawn from your PI. We collect your PI for our purposes, including performing services and operations related to your potential employment or engagement. For additional details or if you have questions, contact us at compliance@integralads.com.

    To learn more about us, please visit http://integralads.com/ 

    Attention agency/3rd party recruiters: IAS does not accept any unsolicited resumes or candidate profiles. If you are interested in becoming an IAS recruiting partner, please send an email introducing your company to recruitingagencies@integralads.com. We will get back to you if there's interest in a partnership.

    #LI-Remote

    See more jobs at Integral Ad Science

    Apply for this job

    +30d

    Staff Site Reliability Engineer

    AcquiaRemote - Costa Rica
    DevOPS9 years of experience6 years of experience3 years of experienceterraformdrupalDesignansibleazurerubyjavakubernetesjenkinspythonAWSPHP

    Acquia is hiring a Remote Staff Site Reliability Engineer

    Acquia empowers the world’s most ambitious brands to create digital customer experiences that matter. With open source Drupal at its core, the Acquia Digital Experience Platform (DXP) enables marketers, developers, and IT operations teams at thousands of global organizations to rapidly compose and deploy digital products and services that engage customers, enhance conversions, and help businesses stand out.

    Headquartered in the U.S., Acquia is positioned as a market leader by the analyst community and is listed as one of the world’s top software companies by The Software Report. We are Acquia. We are a global company with employees located in more than 30 countries, and we’re building for the future. We want you to be a part of it!

    About the role:

    As a Staff Site Reliability Engineer, you will be a key player in designing, implementing, and maintaining our CI/CD pipelines, cloud infrastructure, and monitoring solutions. Your expertise in tools like ArgoCD, Kubernetes, and cloud-native architecture will help us achieve operational excellence at scale. You will work closely with engineering teams to ensure they have the right infrastructure in place to deploy rapidly, safely, and reliably.

    This is a hands-on role for someone who thrives in an environment where automation is the goal, reliability is the baseline, and scalability is second nature. You won’t just be maintaining systems—you’ll be innovating, designing new ways to make our infrastructure smarter and our development faster.

    Job Responsibilities: 

    • CI/CD Pipeline Mastery: Design, build, and optimize continuous integration and continuous deployment (CI/CD) pipelines using ArgoCD, Jenkins, or similar tools. Ensure zero-downtime, fully automated deployment pipelines.
    • Infrastructure as Code (IaC): Build and manage scalable, reliable infrastructure using Terraform, Kubernetes, and other IaC tools. Ensure everything is automated—from deployments to monitoring—so that infrastructure becomes a self-service platform.
    • Cloud Expertise: Architect and manage cloud environments (AWS, GCP, or Azure), focusing on cost optimization, scalability, and performance. Implement disaster recovery, fault tolerance, and high availability strategies.
    • Monitoring and Alerting: Implement comprehensive monitoring solutions using Prometheus, Grafana, ELK, and Datadog to detect and resolve performance bottlenecks before they impact customers. Design and implement automated alerts for proactive system health monitoring.
    • DevOps Advocacy: Champion the culture of DevOps across teams—promote best practices, encourage adoption of new technologies, and drive a continuous learning mindset within the engineering teams. Be the go-to person for CI/CD, infrastructure scaling, and deployment automation.
    • SRE Mindset: Focus on building systems that are resilient by design, automating processes that improve reliability, and implementing Service Level Objectives (SLOs) to align engineering efforts with operational goals.
    • Security-First Approach: Collaborate with security teams to implement robust security practices, from container security to infrastructure hardening. Automate security checks within the pipeline for compliance and vulnerability management.
    • Collaboration with Engineering Teams: Work hand-in-hand with product development teams to understand their needs, integrate CI/CD practices into their workflows, and provide a fast, reliable, and secure path from code to production.
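The SLO work in the responsibilities above rests on simple arithmetic: an availability target implies an error budget, and consumed downtime is measured against it. A hedged sketch (numbers and function names are illustrative):

```python
# Hypothetical sketch of SLO error-budget arithmetic: a 99.9% availability
# target over a 30-day window allows ~43 minutes of downtime.

def error_budget_minutes(slo: float, window_minutes: int) -> float:
    """Allowed downtime for an SLO over a window (e.g. 0.999 over 30 days)."""
    return window_minutes * (1.0 - slo)

def budget_remaining(slo: float, window_minutes: int, downtime_minutes: float) -> float:
    """Minutes of budget left after the downtime already incurred."""
    return error_budget_minutes(slo, window_minutes) - downtime_minutes

MONTH = 30 * 24 * 60  # 43,200 minutes
budget = error_budget_minutes(0.999, MONTH)
assert round(budget, 1) == 43.2                    # ~43 minutes/month at 99.9%
assert budget_remaining(0.999, MONTH, 30.0) > 0    # still inside budget
```

Framing reliability as a budget is what lets SRE teams trade release velocity against risk: ship freely while budget remains, slow down when it is spent.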

    Skills:

    • BS in Computer Science or a comparable field of study, or equivalent practical experience.
    • Experience working with one or more of: Go, Python, Ruby, PHP, Java or Javascript. 
    • Experience with Unix/Linux systems administration using the CLI.
    • Fundamental understanding of TCP/UDP networking concepts
    • Solid oral and written communications skills.
    • CI/CD Expertise: Extensive hands-on experience with CI/CD tools such as ArgoCD, Jenkins, CircleCI, or GitLab CI. Ability to design and implement pipelines that ensure rapid, reliable deployments.
    • Kubernetes Guru: Strong understanding and experience with Kubernetes, Helm, and container orchestration. Ability to scale and manage microservices in production.
    • Cloud Mastery: Proficient in at least one major cloud provider—AWS, GCP, or Azure. Experience with multi-cloud or hybrid-cloud architecture is a plus.
    • IaC Champion: Proficiency in Terraform, Ansible, or CloudFormation to manage infrastructure as code. Familiarity with GitOps workflows and version-controlled infrastructure.
    • Monitoring & Observability: Strong experience with monitoring tools like Prometheus, Grafana, Datadog, ELK, or New Relic. Ability to build custom dashboards and alerting systems.
    • Security-Focused: Deep understanding of security best practices in DevOps, including container security, CI/CD pipeline security, and cloud infrastructure hardening.
    • Problem Solver: Excellent troubleshooting skills with the ability to diagnose issues across a variety of environments, from code to infrastructure.
    • Collaboration Skills: Ability to work effectively in cross-functional teams, influencing peers and driving adoption of best practices across the organization.

    Preferred Qualifications: 

    • 8-13 years of hands-on experience as a DevOps Engineer, SRE, or related role in a cloud-native environment.
    • Proven experience mentoring junior team members. 
    • Deep knowledge of CI/CD pipelines, especially using ArgoCD or similar tools.
    • Proven expertise in cloud platforms (AWS, GCP, Azure), with experience building and managing scalable, reliable infrastructure.
    • Strong coding skills in Python, Go, or Ruby.
    • Experience with service mesh architectures like Istio or Linkerd is a plus.
    • SRE Certification (or equivalent experience) is a bonus.
    • Certified Kubernetes Administrator (CKA) is preferred.
    • A passion for automation, observability, and reliability.

    All qualified applicants will receive consideration for employment without regard to race, color, religion, religious creed, sex, national origin, ancestry, age, physical or mental disability, medical condition, genetic information, military and veteran status, marital status, pregnancy, gender, gender expression, gender identity, sexual orientation, or any other characteristic protected by local law, regulation, or ordinance.

    See more jobs at Acquia

    Apply for this job

    +30d

    Senior Site Reliability Engineer

    AcquiaRemote - Costa Rica
    DevOPS9 years of experience6 years of experience3 years of experienceterraformdrupalDesignansibleazurerubyjavakubernetesjenkinspythonAWSPHP

    Acquia is hiring a Remote Senior Site Reliability Engineer

    Acquia empowers the world’s most ambitious brands to create digital customer experiences that matter. With open source Drupal at its core, the Acquia Digital Experience Platform (DXP) enables marketers, developers, and IT operations teams at thousands of global organizations to rapidly compose and deploy digital products and services that engage customers, enhance conversions, and help businesses stand out.

    Headquartered in the U.S., Acquia is positioned as a market leader by the analyst community and is listed as one of the world’s top software companies by The Software Report. We are Acquia. We are a global company with employees located in more than 30 countries, and we’re building for the future. We want you to be a part of it!

    About the role:

    As a Senior Site Reliability Engineer, you will be a key player in designing, implementing, and maintaining our CI/CD pipelines, cloud infrastructure, and monitoring solutions. Your expertise in tools like ArgoCD, Kubernetes, and cloud-native architecture will help us achieve operational excellence at scale. You will work closely with engineering teams to ensure they have the right infrastructure in place to deploy rapidly, safely, and reliably.

    This is a hands-on role for someone who thrives in an environment where automation is the goal, reliability is the baseline, and scalability is second nature. You won’t just be maintaining systems—you’ll be innovating, designing new ways to make our infrastructure smarter and our development faster.

    Job Responsibilities: 

    • CI/CD Pipeline Mastery: Design, build, and optimize continuous integration and continuous deployment (CI/CD) pipelines using ArgoCD, Jenkins, or similar tools. Ensure zero-downtime, fully automated deployment pipelines.
    • Infrastructure as Code (IaC): Build and manage scalable, reliable infrastructure using Terraform, Kubernetes, and other IaC tools. Ensure everything is automated—from deployments to monitoring—so that infrastructure becomes a self-service platform.
    • Cloud Expertise: Architect and manage cloud environments (AWS, GCP, or Azure), focusing on cost optimization, scalability, and performance. Implement disaster recovery, fault tolerance, and high availability strategies.
    • Monitoring and Alerting: Implement comprehensive monitoring solutions using Prometheus, Grafana, ELK, and Datadog to detect and resolve performance bottlenecks before they impact customers. Design and implement automated alerts for proactive system health monitoring.
    • DevOps Advocacy: Champion the culture of DevOps across teams—promote best practices, encourage adoption of new technologies, and drive a continuous learning mindset within the engineering teams. Be the go-to person for CI/CD, infrastructure scaling, and deployment automation.
    • SRE Mindset: Focus on building systems that are resilient by design, automating processes that improve reliability, and implementing Service Level Objectives (SLOs) to align engineering efforts with operational goals.
    • Security-First Approach: Collaborate with security teams to implement robust security practices, from container security to infrastructure hardening. Automate security checks within the pipeline for compliance and vulnerability management.
    • Collaboration with Engineering Teams: Work hand-in-hand with product development teams to understand their needs, integrate CI/CD practices into their workflows, and provide a fast, reliable, and secure path from code to production.

    Skills:

    • BS in Computer Science or a comparable field of study, or equivalent practical experience.
    • Experience working with one or more of: Go, Python, Ruby, PHP, Java, or JavaScript.
    • Experience with Unix/Linux systems administration using the CLI.
    • Fundamental understanding of TCP/UDP networking concepts.
    • Solid oral and written communication skills.
    • CI/CD Expertise: Extensive hands-on experience with CI/CD tools such as ArgoCD, Jenkins, CircleCI, or GitLab CI. Ability to design and implement pipelines that ensure rapid, reliable deployments.
    • Kubernetes Guru: Strong understanding and experience with Kubernetes, Helm, and container orchestration. Ability to scale and manage microservices in production.
    • Cloud Mastery: Proficient in at least one major cloud provider—AWS, GCP, or Azure. Experience with multi-cloud or hybrid-cloud architecture is a plus.
    • IaC Champion: Proficiency in Terraform, Ansible, or CloudFormation to manage infrastructure as code. Familiarity with GitOps workflows and version-controlled infrastructure.
    • Monitoring & Observability: Strong experience with monitoring tools like Prometheus, Grafana, Datadog, ELK, or New Relic. Ability to build custom dashboards and alerting systems.
    • Security-Focused: Deep understanding of security best practices in DevOps, including container security, CI/CD pipeline security, and cloud infrastructure hardening.
    • Problem Solver: Excellent troubleshooting skills with the ability to diagnose issues across a variety of environments, from code to infrastructure.
    • Collaboration Skills: Ability to work effectively in cross-functional teams, influencing peers and driving adoption of best practices across the organization.

    Preferred Qualifications: 

    • 5-9 years of hands-on experience as a DevOps Engineer, SRE, or related role in a cloud-native environment.
    • Proven experience mentoring junior team members.
    • Deep knowledge of CI/CD pipelines, especially using ArgoCD or similar tools.
    • Proven expertise in cloud platforms (AWS, GCP, Azure), with experience building and managing scalable, reliable infrastructure.
    • Strong coding skills in Python, Go, or Ruby.
    • Experience with service mesh architectures like Istio or Linkerd is a plus.
    • SRE Certification (or equivalent experience) is a bonus.
    • Certified Kubernetes Administrator (CKA) is preferred.
    • A passion for automation, observability, and reliability.

    All qualified applicants will receive consideration for employment without regard to race, color, religion, religious creed, sex, national origin, ancestry, age, physical or mental disability, medical condition, genetic information, military and veteran status, marital status, pregnancy, gender, gender expression, gender identity, sexual orientation, or any other characteristic protected by local law, regulation, or ordinance.

    See more jobs at Acquia

    Apply for this job

    +30d

    Senior DevOps Engineer (Linux, K8s, Any Scripting Language)

    AcquiaRemote - India
    DevOps, Redis, Terraform, SQL, Drupal, Ansible, Azure, Ruby, PostgreSQL, MySQL, Kubernetes, Linux, Jenkins, Python, AWS

    Acquia is hiring a Remote Senior DevOps Engineer (Linux, K8s, Any Scripting Language)

    Acquia empowers the world’s most ambitious brands to create digital customer experiences that matter. With open source Drupal at its core, the Acquia Digital Experience Platform (DXP) enables marketers, developers, and IT operations teams at thousands of global organizations to rapidly compose and deploy digital products and services that engage customers, enhance conversions, and help businesses stand out.

    Headquartered in the U.S., Acquia is a Great Place to Work-Certified™ company in India, is listed as one of the world’s top software companies by The Software Report, and is positioned as a market leader by the analyst community. We are Acquia. We are building for the future and we want you to be a part of it!

    The Opportunity

    The Senior DevOps Engineer is responsible for designing and delivering secure, highly available solutions. You will be a critical part of a team focused on ensuring our services are ready and stress-tested. You should be comfortable taking on new challenges, defining potential solutions, and implementing designs in a team environment. You will be working on a tech stack composed of Linux, Kubernetes, Ruby, Go, Python, PostgreSQL, MySQL, Redis, Jenkins, GitHub, and GCP.

    You'll Spend Time:

    • Partnering closely with Engineering and Support.
    • Owning the deployment and continuous operation of the Monsido platform.
    • Automating as many tasks as possible to make diagnostics, scaling, healing, and deployments a breeze.
    • Working on a team responsible for a blend of architecture, automation, development, and application administration.
    • Developing and deploying solutions across the infrastructure, network, and application layers on public cloud platforms.
    • Ensuring our SaaS platform is available and performing, and that we notice problems before our customers do.
    • Collaborating with Support and Engineering on customer issues, as needed.
    • Working with distributed data infrastructure, including containerization and virtualization tools, to enable unified engineering and production environments.
    • Developing dashboards, monitors, and alerts to increase situational awareness of production issues, SLAs, and security incidents.
    • Independently conceiving and implementing ways to improve development efficiency, code reliability, and test fidelity.
    • Participating in the on-call rotation.
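The monitoring duties above boil down to noticing problems before customers do. A minimal sketch, assuming hypothetical thresholds (a 5% error rate and a 500 ms latency ceiling, neither taken from this posting), of the decision logic behind such an alert:

```python
# Sketch: deciding whether a batch of health-probe results should page on-call.
# The thresholds are illustrative; a real monitor would derive them from an SLO.
from dataclasses import dataclass

@dataclass
class ProbeResult:
    latency_ms: float
    status_code: int

def should_alert(results: list[ProbeResult], max_error_rate: float = 0.05,
                 max_latency_ms: float = 500.0) -> bool:
    """Alert when the error rate or the worst-case latency breaches its threshold."""
    if not results:
        return True  # no data is itself a failure signal
    errors = sum(1 for r in results if r.status_code >= 500)
    if errors / len(results) > max_error_rate:
        return True
    return max(r.latency_ms for r in results) > max_latency_ms

probes = [ProbeResult(120.0, 200), ProbeResult(90.0, 200), ProbeResult(610.0, 200)]
print(should_alert(probes))  # True: one probe exceeded the 500 ms latency ceiling
```

Dashboards and alert rules in the tooling mentioned above encode the same kind of thresholded checks, just evaluated continuously over live metrics.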

    You'll be Successful if You Are:

    • Proficient with Unix/Linux OS administration (5-8 years)
    • Proficient with computer network setup and debugging
    • Proficient with at least one scripting language (Shell, Python, …)
    • Competent with deploying, tuning, and maintaining Linux-based, highly available, fault-tolerant platforms on public cloud providers such as GCP, AWS, or Azure
    • Competent with Kubernetes, including configuration management, running deployments, debugging, etc.
    • Competent with application containerization
    • Familiar with SQL and relational database administration (PostgreSQL, MySQL)
    • Familiar with infrastructure and configuration management tools such as Terraform, Saltstack, etc.
    • Flexible about working a rotational on-call schedule

    Requirements & Suggested Years of Experience:

    • DevOps and/or build & release experience, including delivery: 3+ years
    • Software configuration management tools like Puppet, Saltstack, Chef, Ansible: 2+ years
    • Application monitoring tools: 2+ years
    • Experience with Kubernetes and containerization: 1+ year

    Extra credit:

    • Best practices in infosec.
    • The ability to dig deep into infrastructure and code to solve problems.
    • The drive to solve traditional operations problems through automation.
    • High attention to detail.

    Individuals seeking employment at Acquia are considered without regard to race, color, religion, caste, creed, national origin, age, sex, marital status, ancestry, physical or mental disability, veteran status, gender identity, or sexual orientation. Whatever you answer will not be considered in the hiring process or thereafter.

    See more jobs at Acquia

    Apply for this job

    +30d

    DevOps Engineer - SysAdmin

    DevoteamTunis, Tunisia, Remote
    DevOps, Memcached, Redis, Terraform, Ansible, MongoDB, Azure, API, Git, Docker, Elasticsearch, PostgreSQL, MySQL, Kubernetes, Linux, Jenkins

    Devoteam is hiring a Remote DevOps Engineer - SysAdmin

    Job Description

    • Administer the platform at the L3 support level, ensuring proactive incident management and infrastructure maintenance.
    • Manage continuous integration (CI/CD) for projects in production to ensure fast, reliable deployments.
    • Participate in the design of the infrastructure and application architecture, in collaboration with the development teams.
    • Set up and manage automated tests for platform validation.
    • Monitor and supervise environments using dedicated tools, guaranteeing the performance and availability of critical systems.
    • Write and keep the OPS documentation up to date.

    Technical Skills (Must Have):

    • Expertise in Windows and Linux systems administration (particularly RedHat/CentOS).
    • Experience administering high-traffic web servers (Apache, Nginx, Varnish).
    • Strong skills in SQL/NoSQL database administration (MySQL, PostgreSQL, OCI, Couchbase, Redis, Memcached, Elasticsearch, Cassandra, MongoDB).
    • Good knowledge of data flows and integrations via NiFi, SFTP, and APIs.
    • Mastery of scripting languages (PowerShell, Shell).
    • Infrastructure automation skills with tools such as Ansible and Terraform.
    • Hands-on experience with deployment automation and orchestration tools (Jenkins).

    Nice to have:

    • Knowledge of source control tools (Git, Bitbucket, GitLab).
    • Experience with dashboarding tools such as Grafana and Kibana.
    • Familiarity with containers (Docker) and their orchestration (Kubernetes).
    • Cloud computing skills (Azure, Google Cloud).

    Qualifications

    • You are passionate about DevOps technologies and automation, with solid experience managing complex systems and production environments.
    • You have at least 3 years of experience in a similar environment.
    • You have an analytical mind, strong problem-solving skills, and excellent responsiveness to critical incidents.

    See more jobs at Devoteam

    Apply for this job

    +30d

    DevOps Engineer (Based in Melbourne), KMS Solutions

    KMS TechnologyMelbourne, Australia, Remote
    DevOps, Lambda, Agile, Bachelor's degree, Terraform, Ansible, Docker, Kubernetes, AWS

    KMS Technology is hiring a Remote DevOps Engineer (Based in Melbourne), KMS Solutions

    Job Description

    As a DevOps Engineer, you will be responsible for implementing application solutions in the cloud and for participating in technical research and development to enable continuing innovation within the DevOps space.

    • Implement scalable, resilient, and secure solutions in the public cloud, especially in AWS.

    • Participate in automation initiatives to streamline processes, improve efficiency, and reduce hosting costs

    • Work closely with Product Owner, Platform Team, Solution Architects and development teams for continuous improvement

    • Enhance and drive automation and "Infrastructure as Code" delivery

    • Deliver cloud projects in an Agile environment

    • Participate in technical discussions with existing and potential clients and internal teams

    • Participate in research and development to deliver complex cloud-native or on-premises solutions

    • Analyze and troubleshoot complex software and infrastructure issues, and develop tools/systems for task automation

    • Provide BAU support as needed for critical and escalated issues

    • Responsible for managing and upgrading DevOps toolsets.

    • Maintain 100% automation coverage of core Insight build and deploy processes using cloud-native services and containers
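The "Infrastructure as Code" duties above rest on one core idea: declare the desired state, then reconcile it against what actually exists. A minimal, tool-agnostic sketch (the resource names are illustrative, not from any real environment) of the diffing step that IaC tools perform when planning changes:

```python
# Sketch: the state-diffing step at the heart of IaC/GitOps tooling.
# Given a desired state and the actual state, compute the plan of changes.

def plan_changes(desired: dict[str, dict], actual: dict[str, dict]) -> dict[str, list[str]]:
    """Return the resources to create, update, or delete."""
    to_create = [name for name in desired if name not in actual]
    to_delete = [name for name in actual if name not in desired]
    to_update = [name for name in desired
                 if name in actual and desired[name] != actual[name]]
    return {"create": to_create, "update": to_update, "delete": to_delete}

desired = {"web": {"replicas": 3}, "cache": {"size": "1Gi"}}
actual = {"web": {"replicas": 2}, "worker": {"replicas": 1}}
print(plan_changes(desired, actual))
# {'create': ['cache'], 'update': ['web'], 'delete': ['worker']}
```

Running this plan step on every change, rather than mutating servers by hand, is what makes version-controlled, fully automated infrastructure possible.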

    Qualifications

    • Bachelor's degree in computer science, information technology, engineering or equivalent.
    • At least 4 years' experience working with cloud services like AWS.

    • Solid experience in designing and implementing complex DevOps solutions

    • Experience with cloud-hosted environments such as AWS and Google Cloud (GCP), including AWS Lambda and AWS CloudFormation.

    • A good AWS DevOps development background

    • Experience with Terraform, Kubernetes, CI/CD pipelines, Ansible, and Docker

    See more jobs at KMS Technology

    Apply for this job

    +30d

    Open Telekom Cloud Trainee (REF3335A)

    Deutsche Telekom IT SolutionsBudapest, Debrecen, Szeged, Pécs, Hungary, Remote
    Ansible, OpenStack, Linux, Python

    Deutsche Telekom IT Solutions is hiring a Remote Open Telekom Cloud Trainee (REF3335A)

    Job Description

    Participate in operating the Open Telekom Cloud

    • Working with an OpenStack-based cloud technology in teams such as Hardware, Images, Database & Orchestration, etc.
    • Creating tickets and handling incoming tickets
    • Implementing platform-related changes
    • Close cooperation with Hungarian and international colleagues
    • Automation tasks
    • An agile working environment

    Qualifications

    • Knowledge of the Linux operating system
    • Basic knowledge of computer networking
    • Intermediate English language skills
    • Accurate and precise work, including documentation of tasks
    • Good communication skills
    • Full-time, active student status (under age 25, passive status is also acceptable)
    • Commitment to at least 120 hours per month


    Advantages

    • Knowledge of Python, Bash, or another programming language
    • Basic knowledge of AWX, AWX Tower, and Ansible

     

    See more jobs at Deutsche Telekom IT Solutions

    Apply for this job