terraform Remote Jobs

186 Results

+30d

AWS Cloud Architect

jira, terraform, Design, .net, docker, AWS

Proactive Dealer Solutions is hiring a Remote AWS Cloud Architect

AWS Cloud Architect - Proactive Dealer Solutions - Career Page

See more jobs at Proactive Dealer Solutions

Apply for this job

+30d

Senior Software Engineer, Cloud Backend

agile, terraform, nosql, Design, mongodb, azure, ui, api, java, typescript, angular, jenkins, python, AWS, backend

Evertz Microsystems Limited is hiring a Remote Senior Software Engineer, Cloud Backend

Senior Software Engineer, Cloud Backend - Evertz Microsystems Limited - Career Page

See more jobs at Evertz Microsystems Limited

Apply for this job

+30d

Software Engineer, Cloud Backend

agile, terraform, nosql, Design, mongodb, azure, ui, api, java, typescript, angular, jenkins, python, AWS, backend

Evertz Microsystems Limited is hiring a Remote Software Engineer, Cloud Backend

Software Engineer, Cloud Backend - Career Page

See more jobs at Evertz Microsystems Limited

Apply for this job

+30d

Senior Platform Engineer

4 years of experience, 2 years of experience, agile, 3 years of experience, terraform, ui, java, docker, typescript, linux, angular, jenkins, python, AWS

Evertz Microsystems Limited is hiring a Remote Senior Platform Engineer

Senior Platform Engineer - Evertz Microsystems Limited - Career Page

See more jobs at Evertz Microsystems Limited

Apply for this job

+30d

Full Stack Engineer

terraform, sql, api, java, docker, python, AWS, backend

Cerebral Staffing, LLC is hiring a Remote Full Stack Engineer

Full Stack Engineer - Cerebral Staffing, LLC - Career Page

See more jobs at Cerebral Staffing, LLC

Apply for this job

+30d

(Internal) Cloud Developer

Nordcloud Finland | Helsinki, FI; Jyväskylä, FI; Salo, FI; Oulu, FI; Kuopio, FI | Remote
agile, terraform, sql, Design, azure, python, AWS

Nordcloud Finland is hiring a Remote (Internal) Cloud Developer

We are digital builders born in the cloud, and we are currently looking for a Cloud Developer.

Joining Nordcloud is the chance of a lifetime to leave your mark on the European IT industry! We use an agile, cloud-native approach to empower clients to seize the full potential of the public cloud.

Your daily work:

  • Design and develop applications and integrations that help our business do more with less effort
  • Develop data pipelines for reporting that helps us understand our business better and make wiser decisions
  • Maintain technical documentation for the operation of the solutions
  • Work closely with the IT teams to ensure that the developed solutions can be monitored and maintained effectively

Your skills and attributes of success:

  • BE/BS in Information Technology and/or equivalent years of education and experience working in a related field
  • 2+ years of cloud development experience in a professional business context
  • Hands-on experience with Python and SQL, and some understanding of the following technologies: Infrastructure as Code (e.g. Terraform), CI/CD setup, and Google Cloud Platform services
  • Passion for developing applications, integrations, and end-to-end data process solutions that solve relevant business problems
  • A solution-driven mindset with the ability to learn new technologies while looking for the optimal toolset for the given problem
  • Fluent communication skills in English

What do we offer in return?

  • A highly skilled multinational team
  • Individual training budget and exam fees for partner certifications (Azure, AWS, GCP) and additional certification bonus covered by Nordcloud
  • Access to knowledge-sharing sessions within a community of leading cloud professionals, and the possibility to create your own
  • Flexible working hours and freedom to choose your tools (laptop and smartphone) and ways of working
  • Freedom to work fully remotely within the country of Finland
  • Local benefits such as extensive private health care, wellness benefits, a high-end laptop, and a smartphone of your choice

      Please read our Recruitment Privacy Policy before applying. All applicants must have the right to work in Finland.

      If you’d like to join us, please send us your CV or LinkedIn profile.

      About Nordcloud

Nordcloud, an IBM company, is a European leader in cloud advisory, implementation, application development, managed services, and training. It’s a recognized cloud-native pioneer with a proven track record of helping organizations leverage the public cloud in a way that balances quick wins, immediate savings, and sustainable value. Nordcloud is triple-certified across Microsoft Azure, Google Cloud Platform, and Amazon Web Services – and is a Visionary in Gartner’s Magic Quadrant for Public Cloud IT Transformation Services. Nordcloud has 10 European hubs and more than 1,500 employees, and has delivered over 1,000 successful cloud projects.

      Learn more at nordcloud.com

      #LI-Remote

      +30d

      Sr. DevOps Engineer - Team Lead

Agnos | Remote
agile, terraform, Design, ansible, azure, docker, kubernetes, jenkins, AWS

      Agnos is hiring a Remote Sr. DevOps Engineer - Team Lead

Sr. DevOps Engineer - Team Lead - Agnos - Career Page

      See more jobs at Agnos

      Apply for this job

      +30d

      Sr. DevOps Engineer - Team Lead - USA

Agnos | Remote
agile, terraform, Design, ansible, azure, docker, kubernetes, jenkins, AWS

      Agnos is hiring a Remote Sr. DevOps Engineer - Team Lead - USA

Sr. DevOps Engineer - Team Lead - USA - Agnos - Career Page

See more jobs at Agnos

      Apply for this job

      +30d

      AWS Cloud/Dev Ops Engineer - Remote

Quore | Remote
terraform, sql, Design, ansible, postgresql, mysql, jenkins, python, AWS

      Quore is hiring a Remote AWS Cloud/Dev Ops Engineer - Remote

      AWS Cloud/Dev Ops Engineer - Remote - Quore - Career Page

      See more jobs at Quore

      Apply for this job

      +30d

      Data Engineer [Poland]

Egnyte | Poznań, Poland
agile, terraform, sql, azure, api, linux, python

      Egnyte is hiring a Remote Data Engineer [Poland]

      Description

      The opportunity:

      Egnyte is a provider of enterprise content governance and collaboration software. Our platform empowers companies to work more efficiently and protect their business content. 


We are looking for a talented Data Engineer who will help us develop our Big Data Architecture. You are going to work in the Data Engineering & Analytics team, which is responsible for the development of internal data architecture at Egnyte and for providing this data in a suitable form to the whole company. There is a place in our team for an expert who will support us with integrating new systems, maintaining and upgrading old integrations, and improving the whole architecture. We are looking for a person who has a drive to make things happen and a can-do attitude.


It’s an opportunity to gain or improve experience in the areas of:

      • High data processing optimizations across multiple pipelines,
• High-speed data gathering (databases and APIs) of millions of records,
      • Distributed computing techniques,
      • Close cross-team cooperation and Data-Driven Decision Making (DDDM),
      • Managing the environments and permissions using Terraform Cloud,
      • Handling multiple Google Cloud Platform services.

      Your day-to-day at Egnyte:

      • Build, improve and test code that moves and manipulates data coming from disparate sources, including massive log and event streams, SQL databases, and online, API-based services
      • Implement features that automatically test data operations for completeness, to ensure that data consumers have access to correct information
      • Get hands-on with the data - analyzing and visualizing - when needed to get the job done, to find issues both in your code and the source systems that feed the data
      • Wear many hats and be ready to learn new technologies. You will be a part of a small team that supports a large, data-driven organization with evolving requirements

      About you:

      • Knowledge of SQL and Python, especially data structures, data manipulation, and exception handling
      • Experience interacting with or pulling data from API-based services
      • Experience with cloud-based ETL/ELT tools (GCP, Azure, etc.)
      • Comfortable in daily Linux usage
      • Attention to detail and strong analytical skills
      • Good English skills to effectively communicate with other team members

      Bonus points:

      • Experience with distributed computing and microservice-based architecture,
      • Experience with GCP products (preferably BigQuery, Cloud Storage, Dataflow, Pub/Sub, Compute Engine, and other data-related GCP services),
      • Experience with Terraform Cloud,
      • Experience in the agile-oriented framework.

      What we can offer you:

• Salary from 16,000 to 20,000 PLN net + VAT, depending on skills and experience,
      • Flexible forms of employment and working hours,
      • 100% remote work possibility
      • Stock options,
      • Your own Egnyte account with lifetime access,
• 4,000 PLN gross conference budget per person and an additional 4 training days each year,
      • MyBenefit: you can choose a MultiSport card or gift card every month,
      • Private medical healthcare,
      • In-house English classes.

      See more jobs at Egnyte

      Apply for this job

      +30d

      DevOps Engineer (SRE focus) (f/m/x)

omni:us | Germany, Remote
terraform, ansible, azure, java, docker, postgresql, kubernetes, ubuntu, linux, python, AWS

      omni:us is hiring a Remote DevOps Engineer (SRE focus) (f/m/x)

      omni:us is an Artificial Intelligence as a Service (AIaaS) provider for cognitive claims management. Built on a fully data-driven approach, omni:us is transforming the way insurers interact with their customers. Omni:us provides all the necessary tools and information to make fast, transparent and empathetic claims decisions, whilst improving operational efficiency and reducing loss adjustment expenses. We are proud to count Allianz, Vienna Insurance Group, AmTrust, and UNIQA among our clients.

Founded in 2015, omni:us now employs 65 people from more than 20 different nations. We are a team of leading scientific engineers, academic AI pioneers, full-stack experts, and industry experts, led by serial entrepreneurs with strong business-to-business and corporate experience. Our investors and network combine deep technology, industry, and solution expertise. We’re headquartered in Berlin with representations in the UK, France, and the United States.

      As we want to continue delivering amazing work while having fun at the same time, we are searching for a DevOps Engineer to join us in our venture!

      A big plus if you also have:

      • Experience in building or operating multi-tenant SaaS solutions
• Experience with provisioning tools like Ansible, Chef, Puppet, or Terraform for cross-cloud provisioning
      • Experience with one or more programming languages like Python, Java, Shell

      What will keep you challenged?

      • Further automate our operations by building and maintaining tools for continuous deployment, monitoring, and alerting
      • Work closely and collaboratively with our data science and engineering teams to provide best possible solutions to our customers
      • Look for ways to improve the overall performance of our platform
      • Maintain and optimize our SaaS infrastructure built on Kubernetes and other open-source technologies like Java, Python, and PostgreSQL
• Onboard new customers into our SaaS infrastructure

      What Will Help You Succeed?

• 4+ years of hands-on experience in systems engineering
• Good experience with at least one cloud service provider, such as AWS, Azure, or GCP
• Good experience with Linux systems, mostly CentOS and Ubuntu
• Knowledge of Kubernetes and Docker in production
• Experience with the GitOps methodology

        Anything Else?

        We’re a group of curious and collaborative visionaries, and we’re excited about tackling industry-first problems. We value respectful, honest people who are able to discuss and resolve problems in a constructive way. We’re driven to balance getting things done with following best practices. You’ll fit in well here if you value being part of and contributing to an amazing work community sharing these values.

        What’s in it For You?

• A modern technology stack: Kubernetes, Docker, CircleCI, Microservices, Google Cloud/AWS, TensorFlow, Distributed Computing, Java, Python.
        • A highly motivated, fully committed team that is eager to change the insurance space and appreciates support, guidance and teamwork.
        • A real challenge – full ownership of the product.
        • A truly global company - our team members span over 20 nationalities.
        • Other benefits include; unlimited remote work, 27 vacation days, transportation pass discounts, Urban Sports Club membership, company/team outings, and more.

        omni:us is an equal opportunity employer. We are committed to maintaining an inclusive environment where unique perspectives contribute to groundbreaking solutions. omni:us welcomes people regardless of race, sexual orientation, gender identity or expression, political and religious affiliation, socio-economic background, cultural background, geographic location, physical or mental disabilities, abilities, relationship status, age, education and any other category protected by law. People from underrepresented communities are strongly encouraged to apply.

        See more jobs at omni:us

        Apply for this job

        +30d

        Data Platform Engineer (m/f/d)

Jimdo | Hamburg, DE, Remote
3 years of experience, kotlin, tableau, terraform, airflow, Design, qa, ruby, java, docker, kubernetes, linux, python, AWS

        Jimdo is hiring a Remote Data Platform Engineer (m/f/d)

        Our mission

        At Jimdo, we’re big on small. Our mission is to unleash the power of the self-employed and small businesses—and help them thrive. Small businesses are the backbone of the global economy, but they receive little support or recognition. We see them and are here to support them. Join us to help design intuitive tools that enable small businesses to solve complex problems.

        We run at a steady pace to achieve what we aim for. We learn best by digging deep into data, staying curious, taking calculated risks, and sometimes even falling down along the way. It’s the lessons we learn in the process that make us better problem-solvers for small business owners.

        If you’re motivated by our mission and excited to roll up your sleeves, experiment, learn from mistakes, and make a difference to small businesses around the world, we would love to get to know you.

        The Team

        The Data Platform team is developing, operating, and improving a highly scalable, robust, and resilient data infrastructure, which is the backbone of all data services, the central data warehouse, and our reporting & analytics infrastructure. As business needs are growing and becoming more diverse, the team plans to increase our systems' scalability and introduce new services for a variety of use cases, ranging from core infrastructure and Data/DevOps tasks to advanced monitoring and anomaly detection. The team cooperates with the Analytics teams in the Data Department to maximise the business impact and works closely with the Jimdo infrastructure teams.

        Our expectations

You have 3 years of experience in one or more of the following topics:

        • Operating Linux or Docker
        • AWS
        • Software development (Java or python)
        • Infrastructure as code (terraform, cloudformation etc)
        • CI/CD pipelines
        • Data related topics: Redshift, Snowflake, Airflow, dbt etc

You’ll be part of a team which does the following:

        • Design, build and operate a highly scalable data platform, further advancing our approach to designing robust, self-healing, resilient systems.
• Implement advanced monitoring and alerting with respect to the data infrastructure as well as the data, the data flows, and pipelines; this also includes anomaly detection.
        • Ensure high test coverage and improve our QA and testing concepts with respect to the data pipelines and workflows.
        • Educate and consult data & analytics engineers on designing, building, and operating maintainable, scalable, and reliable data services and workflows.
        • Be responsible for the overall system's health of the data infrastructure.

        Some of the technologies you will work with and learn:

        • AWS
        • Kubernetes / Docker
        • Github-Actions / Terraform / Terragrunt / Atlantis
        • Kafka
        • Java / Python / Kotlin
        • Airflow / DBT / Redshift / Tableau

        What We Value

• Jimdo's success is rooted in no small part in consistently using state-of-the-art cloud services. We are looking for engineers who have a solid grasp of cloud technologies and a strong interest in distributed systems.
        • Our data infrastructure and the services running on top of it ultimately contribute to the success of our several millions of customers and we believe that in the future data will play an even more significant role both for our users and for Jimdo. You fit right in if you share the same view about creating value from data and have experience building and operating great tooling for this purpose.
• We leverage different technologies and languages depending on the problem we are trying to solve, so we value people who can pick up new languages and tools when necessary and find the right tool for the job at hand. Currently we use mainly Python and Java, but also some Ruby and Kotlin.
• You have excellent problem-solving skills. You use a systematic and thorough approach. You think from first principles. You have a bias for action and know how to diagnose and resolve problems within complex systems.

        Jimdo is proud to be an equal-opportunity employer. This means that we don't discriminate based on race or ethnic origin, color, the language(s) you speak, where you (or your parents) are from, or whether or not you consider yourself to have a disability. Neither will your age, gender, gender identity, sexual orientation, religion, beliefs, or political opinions play a part in your application with us. We're a diverse team in so many ways, and we love it that way.

        Vasiliki is looking forward to receiving your application.

        By sending your application, you declare that you have read and understood the Jimdo Applicant Privacy Policy.

        See more jobs at Jimdo

        Apply for this job

        +30d

        DevOps/ Data Engineer - (m/f/d)

Jimdo | Hamburg, DE, Remote
3 years of experience, kotlin, tableau, terraform, airflow, Design, qa, ruby, java, docker, kubernetes, linux, python, AWS

        Jimdo is hiring a Remote DevOps/ Data Engineer - (m/f/d)

        Our mission

        At Jimdo, we’re big on small. Our mission is to unleash the power of the self-employed and small businesses—and help them thrive. Small businesses are the backbone of the global economy, but they receive little support or recognition. We see them and are here to support them. Join us to help design intuitive tools that enable small businesses to solve complex problems.

        We run at a steady pace to achieve what we aim for. We learn best by digging deep into data, staying curious, taking calculated risks, and sometimes even falling down along the way. It’s the lessons we learn in the process that make us better problem-solvers for small business owners.

        If you’re motivated by our mission and excited to roll up your sleeves, experiment, learn from mistakes, and make a difference to small businesses around the world, we would love to get to know you.

        The Team

        The Data Platform team is developing, operating, and improving a highly scalable, robust, and resilient data infrastructure, which is the backbone of all data services, the central data warehouse, and our reporting & analytics infrastructure. As business needs are growing and becoming more diverse, the team plans to increase our systems' scalability and introduce new services for a variety of use cases, ranging from core infrastructure and Data/DevOps tasks to advanced monitoring and anomaly detection. The team cooperates with the Analytics teams in the Data Department to maximise the business impact and works closely with the Jimdo infrastructure teams.

        Our expectations

You have 3 years of experience in one or more of the following topics:

        • Operating Linux or Docker
        • AWS
        • Software development (Java or python)
        • Infrastructure as code (terraform, cloudformation etc)
        • CI/CD pipelines
        • Data related topics: Redshift, Snowflake, Airflow, dbt etc

You’ll be part of a team which does the following:

        • Design, build and operate a highly scalable data platform, further advancing our approach to designing robust, self-healing, resilient systems.
• Implement advanced monitoring and alerting with respect to the data infrastructure as well as the data, the data flows, and pipelines; this also includes anomaly detection.
        • Ensure high test coverage and improve our QA and testing concepts with respect to the data pipelines and workflows.
        • Educate and consult data & analytics engineers on designing, building, and operating maintainable, scalable, and reliable data services and workflows.
        • Be responsible for the overall system's health of the data infrastructure.

        Some of the technologies you will work with and learn:

        • AWS
        • Kubernetes / Docker
        • Github-Actions / Terraform / Terragrunt / Atlantis
        • Kafka
        • Java / Python / Kotlin
        • Airflow / DBT / Redshift / Tableau

        What We Value

• Jimdo's success is rooted in no small part in consistently using state-of-the-art cloud services. We are looking for engineers who have a solid grasp of cloud technologies and a strong interest in distributed systems.
        • Our data infrastructure and the services running on top of it ultimately contribute to the success of our several millions of customers and we believe that in the future data will play an even more significant role both for our users and for Jimdo. You fit right in if you share the same view about creating value from data and have experience building and operating great tooling for this purpose.
• We leverage different technologies and languages depending on the problem we are trying to solve, so we value people who can pick up new languages and tools when necessary and find the right tool for the job at hand. Currently we use mainly Python and Java, but also some Ruby and Kotlin.
• You have excellent problem-solving skills. You use a systematic and thorough approach. You think from first principles. You have a bias for action and know how to diagnose and resolve problems within complex systems.

        Jimdo is proud to be an equal-opportunity employer. This means that we don't discriminate based on race or ethnic origin, color, the language(s) you speak, where you (or your parents) are from, or whether or not you consider yourself to have a disability. Neither will your age, gender, gender identity, sexual orientation, religion, beliefs, or political opinions play a part in your application with us. We're a diverse team in so many ways, and we love it that way.

        Vasiliki is looking forward to receiving your application.

        By sending your application, you declare that you have read and understood the Jimdo Applicant Privacy Policy.

        See more jobs at Jimdo

        Apply for this job

        +30d

        Infrastructure Engineer

causaLens | Remote
agile, terraform, azure, docker, kubernetes, python, AWS

        causaLens is hiring a Remote Infrastructure Engineer

        causaLens are the pioneers of Causal AI — a giant leap in machine intelligence.

        We build Causal AI-powered products that are trusted by leading organizations across a wide range of industries. Our No-Code Causal AI Platform empowers all types of users to make superior decisions through an intuitive user interface. We are creating a world in which humans can trust machines with the greatest challenges in the economy, society, and healthcare.


        Summary 

We are looking for an Infrastructure Engineer based in London to join our Core Infrastructure team in building a platform to optimise every business on the planet. This is a full-time placement with significant opportunities for personal development and growth.


        Roles and Responsibilities

As an engineer first, focusing on infrastructure, you’ll be pivotal in implementing and maintaining our core infrastructure as code and in ensuring the availability, performance, and security of new and existing infrastructure. You’d have the opportunity to work on multiple cloud architectures and have a real impact on the foundations of our products.

A successful candidate will be one who helps the team work smart, fast, and clean: eliminating toil, automating where sensible, and making good tech choices.


        What you will be working on

The Core Infrastructure team is made up of engineers who are heavily collaborative, focusing on supporting the business needs of stakeholders throughout our company and our customers, and working closely with, enabling, and empowering our Platform Engineering, Machine Learning, and Applied Data Science teams. This role is an opportunity either for engineers who have found themselves keenly interested in the CNCF ecosystem and in solving infrastructure problems as software engineers, or for DevOps engineers who want to evolve their career into SRE and multi-cloud deployments.

        Some of your responsibilities will include:

        • Automating cloud agnostic infrastructure, reducing toil

        • Enhancing observability and visibility of our infrastructure, systems and processes

        • Enhancing the change pipeline, improving change velocity

        • Empowering teams to self-fulfil, shifting responsibility left

        You’ll be using Kubernetes, Docker and Terraform and working on AWS, GCP and Azure clusters.

        See more jobs at causaLens

        Apply for this job

        +30d

        Senior Data Engineer (USA Remote)

Blue Orange Digital | New York, NY (Remote)
6 years of experience, terraform, airflow, sql, Design, azure, docker, linux, python, AWS

        Blue Orange Digital is hiring a Remote Senior Data Engineer (USA Remote)

        Blue Orange is seeking a Senior Azure Data Engineer to join our team to help build up our data engineering practice. Our Platform Engineers require a diverse skill set including system administration, DevOps, infrastructure automation, data modeling, and workflow orchestration. Blue Orange builds enterprise data platforms and systems for a variety of clients, so this candidate should have experience with supporting modern data technologies. The ideal candidate will have experience with multiple data engineering technologies across multiple clouds and deployment scenarios. In particular, we’re looking for someone with experience with Azure DevOps, Snowflake, Airflow, and dbt.

        This is a full-time fully remote position.

        Core Responsibilities & Skills:

        • Work with data teams to help design, build and deploy data platforms in the cloud (Azure, AWS, GCP) and automate their operation.
        • Work with Azure DevOps, Terraform, CloudFormation, and other Automation and infrastructure tools to build robust systems.
        • Work with Airflow, dbt, and other data orchestration and ETL tools to build high-performance data pipelines.
• Provide leadership in applying software development principles and best practices, including Continuous Integration, Continuous Delivery/Deployment, Infrastructure as Code, and automated testing across multiple software applications.
        • Support heterogeneous technologies environments including both Windows and Linux systems.
        • Develop reusable, automated processes, and custom tools.

        Qualifications:

        • BA/BS degree in Computer Science or a related technical field, or equivalent practical experience.
        • At least 6 years of experience building and supporting data platforms; exposure to data technologies like Azure Data Factory, Azure Synapse Analytics, AWS Glue, Airflow, Spark.
        • Experience with Cloud Data Warehouses, Snowflake in particular.
        • Advanced level Python, SQL, and Bash scripting.
        • Experience designing and building robust CI/CD pipelines.
        • Strong Linux system administration skills.
        • Comfortable with Docker, configuration management, and monitoring tools.
        • Knowledge of best practices related to security, performance, and disaster recovery.
        • Experience working in cloud environments, at a minimum experience in Azure and AWS.
        • Enjoys collaborating with other engineers on architecture and sharing designs with the team.
        • Excellent verbal and written English communication.
        • Interacts with others using sound judgment, good humor, and consistent fairness in a fast-paced environment.

        Bonus Points:

• Hold certifications for Azure DevOps, Azure Data Fundamentals, or Snowflake.

        Our Benefits Include:

        • 401k Matching
        • PTO
        • 100% remote role with an option for hybrid
        • Healthcare, Dental, Vision, and Life Insurance

Salary: USD 130K - 160K (per year)

        Blue Orange Digital is an equal opportunity employer.

        See more jobs at Blue Orange Digital

        Apply for this job

        +30d

        Senior Software Engineer (Pyspark/SQL)

terraform, airflow, sql, api, docker, mysql, python, AWS

        Cerebral Staffing, LLC is hiring a Remote Senior Software Engineer (Pyspark/SQL)

        Senior Software Engineer (Pyspark/SQL) - Cerebral Staffing, LLC - Career Page

        See more jobs at Cerebral Staffing, LLC

        Apply for this job

        +30d

        DevOps Engineer

Edquity | New York, NY, Remote
terraform, Design, ansible, api, git, docker, linux, AWS

        Edquity is hiring a Remote DevOps Engineer

        Company Overview:

At Edquity (soon to be Beam), we’re building cash assistance technology that empowers institutions and government leaders to deliver funding equitably, efficiently, and securely to those who need it most. Our research-backed approach streamlines the administration of benefits while also addressing common pitfalls in program management — including racial bias, inefficiency, compliance risk, and lack of transparency and monitoring — to quickly deliver funds to those with the greatest need. Since 2020, we’ve delivered more than $140M in cash grants to more than 150,000 people in need through our platform.

Beam is a Series A, venture-backed company that has received support from many of the leading impact and postsecondary success investors, as well as non-dilutive support from foundations like the Bill and Melinda Gates Foundation.


        Job Overview:

        Beam seeks a DevOps Engineer to maintain and scale Beam’s infrastructure. The DevOps Engineer will be responsible for improving automation, infrastructure reliability, and enabling engineering teams to use new technologies in a scalable, reliable, and highly available way. This position will be an advocate for environmental consistency, understanding the role that code plays in achieving that goal. Orchestration, configuration management and scripting are skillsets at the core of your toolbox, and you know the values and trade-offs of each. You believe that observability is a key component of DevOps, ensuring that monitoring, logging and tracing are priorities, not afterthoughts.


        Responsibilities:

        • Design, test and implement continuous integration and deployment pipelines using GitLab CI
        • Design and develop automation tools and frameworks used across the entire development stack
        • Optimize system performance, availability and scalability
        • Troubleshoot source code management and deployment issues
        • Build and maintain IaC for AWS cloud deployments with tools like Terraform
        • Create and maintain documentation on configuration, troubleshooting, design etc.
        • Perform security audits and assist with hardening servers and systems against attacks
        • Assist with IT and compliance, as needed
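As an illustration of the pipeline work described above, a minimal GitLab CI configuration might look like the following sketch. The stage names, images, and scripts are illustrative assumptions, not Beam's actual pipeline:

```yaml
# .gitlab-ci.yml - minimal sketch of a test/deploy pipeline.
stages:
  - test
  - deploy

test:
  stage: test
  image: python:3.12
  script:
    - pip install -r requirements.txt
    - pytest

deploy:
  stage: deploy
  image: hashicorp/terraform:1.7
  script:
    - terraform init
    - terraform apply -auto-approve
  rules:
    - if: $CI_COMMIT_BRANCH == "main"  # deploy only from the main branch
```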


        Qualifications:

        • AWS: Security consciousness, adhering to the principle of least privilege; Exposure to IAM, S3, EKS, Lambda, API Gateway, CloudWatch, Kinesis, and VPC networking
        • CI/CD: Able to design and build CI/CD pipelines using any tool, preferably Gitlab
        • Configuration Management: Knowledge of and experience with configuration management tools such as Salt, Ansible, Chef, or Puppet
        • Development: Familiarity with at least one programming language; Able to write a basic bash script for automation; Comfortable using git
        • Kubernetes/Docker: An understanding of k8s infrastructure (Deployment, StatefulSet, Ingress, Certificates); An understanding of how to build docker images, run docker containers, and troubleshoot when a container doesn’t start correctly
        • Monitoring: Strong knowledge of Datadog and CloudWatch or similar; In-depth knowledge of Linux server environments; Knowledge of database systems and security
        • Terraform: Knowledge of basic module calls is a must; Knowledge of advanced Terraform syntax such as ‘dynamic’ is a plus
        • A commitment to and passion for Beam’s mission, vision, and values
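The two Terraform constructs named in the qualifications — a basic module call and the `dynamic` block — can be sketched as follows. The module path, variable names, and security group are hypothetical examples, not part of the posting:

```hcl
# Basic module call: consumes a local module (hypothetical path and outputs).
module "network" {
  source     = "./modules/vpc"
  cidr_block = "10.0.0.0/16"
}

variable "ingress_rules" {
  type = list(object({
    port        = number
    cidr_blocks = list(string)
  }))
  default = [
    { port = 443, cidr_blocks = ["0.0.0.0/0"] },
  ]
}

resource "aws_security_group" "app" {
  name   = "app-sg"
  vpc_id = module.network.vpc_id # assumes the module exports a vpc_id output

  # `dynamic` generates one ingress block per element of var.ingress_rules,
  # instead of hand-writing each rule as a separate block.
  dynamic "ingress" {
    for_each = var.ingress_rules
    content {
      from_port   = ingress.value.port
      to_port     = ingress.value.port
      protocol    = "tcp"
      cidr_blocks = ingress.value.cidr_blocks
    }
  }
}
```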


        Compensation and Benefits:

        The salary range for this position has been benchmarked in relation to the scope of the role, market rate, company stage, and internal equity. The salary for this role will be between $120,000 - $135,000. Where a candidate falls within the band is determined by skillset, experience level, and geographic location. In addition to base salary, this role will come with a total compensation package that includes equity shares and competitive benefits. Some of our benefits include:

        • Fully paid health insurance (Medical/Vision/Dental)
        • Unlimited PTO, Sick and Mental Health Benefits
        • 11 paid company holidays
        • 401k with a 4% match
        • Generous parental leave
        • Annual Professional Development Stipend
        • One time Home Office Setup Stipend
        • Equity in Beam
        • Many more!

        Beam is committed to building a diverse staff and strongly encourages applications from candidates of color. Beam provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state or local laws.

        See more jobs at Edquity

        Apply for this job

        +30d

        Senior Network Security Engineer

terraform, Design, azure, AWS

        BlueVoyant is hiring a Remote Senior Network Security Engineer

        Senior Network Security Engineer - BlueVoyant - Career Page

        See more jobs at BlueVoyant

        Apply for this job

        +30d

        Principal DevSecOps Engineer

Zantech - Fort Belvoir, VA (Remote)
agile, terraform, Design, kubernetes

        Zantech is hiring a Remote Principal DevSecOps Engineer

Are you looking for your next challenge? Are you ready to work with a performance-based small company? At Zantech, we are a dynamic small business focused on providing complex, mission-focused solutions, with a proven track record of outstanding customer performance and high employee satisfaction. We are looking for you: someone who strives to build a strong team and to deliver “Outstanding Performance… Always!” If so, we would love to talk with you about the next step in your career.

Zantech is looking for a talented Principal DevSecOps Engineer with an active Top Secret security clearance for an upcoming role supporting an Army client.

        Position Skills and Responsibilities:

        • Lead the planning and design effort for all tasks in the backlog (e.g. POA&M items, component upgrades, feature requests, bug fixes)
        • Implement tasks in the backlog (e.g. POA&M items, component upgrades, feature requests, bug fixes)
        • Review all changes to the contract baseline
        • Ensure the software can be deployed successfully in all environments (AC2SP Virtual Private Clouds on all domains)
        • Ensure the security controls are implemented correctly (e.g. no regressions) with each release
        • Ensure the Body of Evidence (BOE) is up-to-date with each release

        Required Knowledge, Skills and Abilities:

        • BS in Science, Technology, Engineering, or Mathematics with 5 years’ experience as an engineering lead, OR, without a degree, at least 10 years’ experience as an engineering lead
        • At least 5 years’ experience applying agile methodologies to the system development life cycle
        • 5 years’ experience with Infrastructure-as-Code / Configuration-as-Code
        • 5 years’ experience with Commercial Cloud Services
        • 3 years’ experience with Kubernetes
        • 2 years’ experience with GitOps
        • 2 years’ experience with the Platform One Big Bang service
        • 1 year experience with the assessment and accreditation of a national security system (NSS)

        Desired Knowledge, Skills and Abilities:

        • 3 years’ experience with Terraform
        • 3 years’ experience with Amazon Web Services for the DoD/IC (e.g. C2S)
        • Effective oral and written communication skills
        • Presentation and meeting facilitation experience

        Required Security Clearance:

        • Current Top Secret clearance per contract requirements with eligibility for SCI and NATO read-on prior to starting work

        Location:

        • Offsite full-time and available to support onsite within 24 hours

“Outstanding Performance… Always!”

        Our corporate motto represents our commitment to build long-term relationships with both our clients and our employees by providing the highest quality service in everything we do. We strive for excellence for our clients and for each other.

        We embrace the opportunity to hire individuals with new talents and fresh perspectives. Zantech offers a competitive compensation, strong benefits, and vacation package, as well as providing you with a fast paced and exciting work environment. Come join our team!

        Zantech provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state or local laws.

        See more jobs at Zantech

        Apply for this job

        +30d

        Sr. Site Reliability Engineer

Wiser Solutions - Remote, India
terraform, nosql, postgres, RabbitMQ, Design, mongodb, java, docker, elasticsearch, kubernetes, linux, python, AWS, backend, Node.js

        Wiser Solutions is hiring a Remote Sr. Site Reliability Engineer

Sr. Site Reliability Engineer - Wiser Solutions - Career Page

        See more jobs at Wiser Solutions

        Apply for this job