Hyderabad - India

Employment Type :
Full-time

Experience :
4-7 Years

Overview :

We are seeking a Data Engineer with strong DevOps and analytical skills to design, build, and maintain scalable data pipelines for financial data. This role supports the Engineering, Operations, and Analytics teams by delivering high-quality, reliable datasets and automated workflows.

Key Responsibilities :

  • Build and maintain ETL/ELT pipelines for financial datasets (a short sketch follows this list).
  • Develop data pipelines and optimized data structures for analytics and reporting.
  • Implement CI/CD, infrastructure-as-code, and pipeline monitoring practices (DevOps/DataOps).
  • Ensure data quality, lineage, and governance across platforms.
  • Collaborate with Engineering, Operations and Analytics teams to support reporting and insights.
  • Optimize pipeline performance, reliability, and cloud resource usage to control costs.
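
To illustrate the pipeline work above, here is a minimal sketch of a daily ETL job, assuming recent Airflow (2.4+, one of the orchestration tools listed below); the DAG name, tasks, and schedule are hypothetical placeholders, not part of this role's actual stack.

    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def extract_transactions():
        # Hypothetical extract step: pull the previous day's financial transactions.
        print("extracting transactions...")

    def load_to_warehouse():
        # Hypothetical load step: upsert cleaned records into the warehouse.
        print("loading to warehouse...")

    with DAG(
        dag_id="daily_transactions_etl",  # hypothetical pipeline name
        start_date=datetime(2024, 1, 1),
        schedule="@daily",
        catchup=False,
    ) as dag:
        extract = PythonOperator(task_id="extract", python_callable=extract_transactions)
        load = PythonOperator(task_id="load", python_callable=load_to_warehouse)
        extract >> load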

Required Qualifications :

  • Master's degree in Computer Science, Data Engineering, Information Systems, or a related field.
  • 3–5 years of hands-on experience in data engineering.
  • Strong SQL and Python; experience with Spark or equivalent.
  • Experience with cloud data platforms (Spanner, BigQuery, Redshift, or similar).
  • Proficiency with DevOps tools (Git, CI/CD pipelines, Terraform/IaC).
  • Experience with orchestration tools (Airflow, Pentaho).
  • Understanding of financial datasets and domain concepts.
  • Knowledge of data quality/observability tools.

Employment Type :
Full-time

Experience :
5+ Years

Technical Skills :

  • Minimum 3 years of experience in Splunk real-time deployments and configuration of Cribl worker nodes and filtering.
  • Minimum 3 years of experience in Splunk Administration and operational support.
  • Hands-on experience with version control tools such as Git/GitHub.
  • Hands-on experience with log management systems like syslog-ng or rsyslog.
  • Intermediate to advanced proficiency in Python or another scripting language.
  • Experienced in working with business partners to gather and interpret requirements.
  • Effective documentation, communication, and interpersonal skills; able to collaborate within the immediate team as well as with other groups in IT.

Preferred Skills :

  • Hands-on experience in managing Splunk & Cribl infrastructure and Enterprise Security configurations.
  • Splunk Architect certification or equivalent would be an added advantage.

Responsibilities :

Softility Inc. seeks a Splunk consultant with a minimum of 5-6 years of experience focused on Splunk Core responsibilities such as architecting Splunk Enterprise deployments and managing high availability.

  • This role will join the Softility – Observability & Cloud Solutions Practice, which manages multi-tenant Splunk & Cribl Enterprise deployments for reputed clients with vast infrastructure spread across the globe.
  • This is a strategic position and will be instrumental in the design, implementation, support, performance, optimization, and integrity of the logging ecosystem.
  • You will work closely with multiple stakeholders and global partners.
  • This is a multi-disciplinary role that will interact directly with developers and different IT functions, including Security Engineering teams, to:
  • Integrate various applications and databases with Splunk Enterprise.
  • Analyze the existing Splunk set-up to assess the data flow from log sources.
  • Identify the volume of data flowing into Splunk & Cribl and chart an action plan for data optimization.
  • Analyze the required Splunk & Cribl specifications to set up seamless logging flow for Greenfield regions.
  • Independently manage and execute the one-time setup and administer ongoing activities.
  • Configure Index and Search Head clustering and integrate with Enterprise Security Search heads.
  • Configure Cribl workers and leader to ensure log ingest from sources flow through Cribl stream with necessary optimization filtering across the pipelines.
  • Enable connectivity between multi-tenant Splunk and Cribl for seamless InfoSec monitoring.
  • Review and identify noisy and unwanted log ingest, and prepare optimization estimates for Leadership review (a sketch of this kind of analysis follows this list).
  • Clearly communicate the risks and business impact that infrastructure changes may introduce.
  • Brainstorm probable approaches and best practices for logging implementations.
  • Handle change management and work on-call if required.
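
As a concrete illustration of the noise-analysis item above, here is a hedged Python sketch that queries Splunk's license usage logs over the REST API and ranks sourcetypes by ingest volume; the host, credentials, and 7-day window are hypothetical placeholders.

    import json

    import requests

    SPLUNK = "https://splunk.example.com:8089"  # hypothetical search head
    SEARCH = (
        "search index=_internal source=*license_usage.log type=Usage "
        "| stats sum(b) as bytes by st | sort - bytes"
    )

    resp = requests.post(
        f"{SPLUNK}/services/search/jobs/export",
        auth=("admin", "changeme"),  # placeholder credentials
        data={"search": SEARCH, "earliest_time": "-7d", "output_mode": "json"},
        verify=False,  # lab-only shortcut; verify certificates in production
        stream=True,
    )
    # The export endpoint streams one JSON object per line.
    for line in resp.iter_lines():
        if line:
            event = json.loads(line)
            if "result" in event:
                r = event["result"]
                print(f'{r["st"]}: {float(r["bytes"]) / 1024**3:.2f} GiB')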

Employment Type :
Full-time

Experience :
2-15 Years

Required Skills :

  • ELK Cloud Migration & Implementation: Lead end-to-end ELK Cloud SaaS migrations, ensuring seamless integration and scalability.
  • Observability & Monitoring: Design and implement logging, monitoring, and alerting solutions tailored for cloud-based architectures.
  • Kibana & Grafana Dashboards: Develop advanced real-time visualizations and dashboards for enhanced system performance tracking.
  • Log Ingestion & Processing: Configure and optimize Logstash pipelines to manage structured and unstructured log data.
  • Elasticsearch Performance Tuning: Optimize indexing, search, and cluster performance in a cloud environment.
  • Security & Compliance: Implement role-based access controls, encryption, and security best practices for ELK Cloud deployments.
  • Incident Response & Troubleshooting: Set up proactive alerting systems and troubleshoot issues in real-time.

Job Description :

  • We are looking for an experienced ELK Cloud SaaS Engineer with a strong background in ELK Stack (Elasticsearch, Logstash, Kibana) and Grafana. 
  • The ideal candidate should have hands-on experience in migrating and implementing ELK solutions in a Cloud SaaS environment and optimizing observability frameworks for enterprise applications.

Required Skills & Qualifications :

  • 3+ years of hands-on experience with ELK Stack & Grafana in large-scale cloud deployments.
  • Expertise in ELK Cloud SaaS migrations and best practices.
  • Strong knowledge of AWS/GCP/Azure logging and observability solutions.
  • Experience with log pipeline management (Logstash, Fluentd, Kafka, etc.).
  • Proficiency in writing and optimizing Elasticsearch queries for analytics (see the sketch after this list).
  • Hands-on experience with Terraform, Ansible, or Kubernetes for cloud automation (preferred).
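
To illustrate the query work above, here is a minimal analytics aggregation, assuming the elasticsearch-py 8.x client; the endpoint, API key, index pattern, and field names are all hypothetical.

    from elasticsearch import Elasticsearch

    es = Elasticsearch("https://elastic.example.com:9200", api_key="...")  # placeholders

    resp = es.search(
        index="app-logs-*",
        size=0,  # aggregation only; skip individual hits
        query={"range": {"@timestamp": {"gte": "now-1h"}}},
        aggs={"by_service": {"terms": {"field": "service.keyword", "size": 10}}},
    )
    # Print event counts per service for the last hour.
    for bucket in resp["aggregations"]["by_service"]["buckets"]:
        print(bucket["key"], bucket["doc_count"])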

Employment Type :
Full-time

Experience :
3+ Years

Required Skills :

  • Strong hands-on experience with ELK Stack (Elasticsearch, Logstash, Kibana).
  • Experience with log shippers (Filebeat, Fluentd, Fluent Bit, etc.).
  • Solid understanding of Kubernetes concepts (Pods, Services, ConfigMaps, Helm, etc.).
  • Working experience with Google Cloud Platform (GCP).
  • Proficiency in Linux system administration.
  • Experience with REST APIs and JSON.
  • Knowledge of index lifecycle management (ILM) and Elasticsearch performance tuning (a short ILM sketch follows this list).
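
As a short ILM illustration for the last item, the sketch below registers a policy that rolls indices over at 50 GB or one day and deletes them after 30 days, assuming the elasticsearch-py 8.x client; the endpoint, policy name, and thresholds are hypothetical.

    from elasticsearch import Elasticsearch

    es = Elasticsearch("https://elastic.example.com:9200", api_key="...")  # placeholders

    es.ilm.put_lifecycle(
        name="app-logs-policy",  # hypothetical policy name
        policy={
            "phases": {
                "hot": {
                    "actions": {
                        "rollover": {"max_primary_shard_size": "50gb", "max_age": "1d"}
                    }
                },
                "delete": {"min_age": "30d", "actions": {"delete": {}}},
            }
        },
    )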

Cloud & DevOps:

  • Experience with GKE (Google Kubernetes Engine)
  • Familiarity with CI/CD pipelines
  • Exposure to Terraform, Ansible, or similar IaC tools (preferred)

Good to Have:

  • Experience with APM, metrics, or observability tools
  • Knowledge of Prometheus, Grafana
  • Scripting skills (Bash, Python, or similar)
  • Experience with multi-cluster or multi-cloud logging architectures

Responsibilities:

  • Design, implement, and manage ELK Stack (Elasticsearch, Logstash, Kibana) solutions for centralized logging and analytics.
  • Configure and manage log shippers such as Filebeat, Metricbeat, Fluentd, or Fluent Bit.
  • Develop and optimize Logstash pipelines for log parsing, filtering, enrichment, and routing.
  • Create and maintain Kibana dashboards, visualizations, and alerts.
  • Deploy and manage ELK components in Kubernetes (K8s) environments.
  • Integrate logging solutions with GCP services (GKE, Compute Engine, Cloud Logging, Pub/Sub, etc.).
  • Ensure high availability, scalability, and performance of logging infrastructure.
  • Troubleshoot log ingestion, indexing, and visualization issues.
  • Implement security best practices (RBAC, TLS, authentication, authorization).
  • Collaborate with DevOps, SRE, and application teams to improve observability.
  • Automate deployments using Helm, YAML manifests, or Infrastructure as Code (IaC) tools.
  • Monitor cluster health and optimize Elasticsearch performance and storage (a monitoring sketch follows this list).
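
Here is a hedged sketch of the health monitoring mentioned in the last item, again assuming the elasticsearch-py 8.x client; the endpoint and the five-index cutoff are hypothetical.

    from elasticsearch import Elasticsearch

    es = Elasticsearch("https://elastic.example.com:9200", api_key="...")  # placeholders

    health = es.cluster.health()
    if health["status"] != "green":
        print(f'cluster {health["cluster_name"]} is {health["status"]}: '
              f'{health["unassigned_shards"]} unassigned shards')

    # List the five largest indices to spot storage hot spots.
    for idx in list(es.cat.indices(format="json", bytes="gb", s="store.size:desc"))[:5]:
        print(idx["index"], idx["store.size"], "GB")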

Employment Type :
Full-time

Experience :
7 to 12 Years

Required Skills :

  • 5+ years of SRE/DevOps production operation in large-scale, Internet-facing infrastructure
  • AWS/GCP Certification or Solution Architecture Certification is a plus.
  • Direct experience building cloud-based infrastructure solutions in AWS or GCP with a strong knowledge of governance models.
  • Strong knowledge of Linux operating systems (RHEL/CentOS/Debian) and their fundamentals.
  • Expertise in continuous integration, continuous deployment, application performance monitoring, and alerting systems
  • Experience in multiple scripting languages: Python, Bash, Perl, JavaScript
  • Experience supporting containerized workloads with Docker and Kubernetes (EKS, GKE)
  • Strong cloud automation experience with tools such as Terraform, Docker, Ansible, Chef, etc.
  • Expertise with cloud monitoring and management systems
  • Expertise in DevOps tools: Jenkins, Git, Bitbucket, Jira
  • Expertise in monitoring and management tools such as Prometheus, ELK, CloudWatch, and Stackdriver
  • Experience in working in/with Agile processes (Scaled Agile Framework/SAFe preferred) and tools (preferably Jira)
  • Experience in working in/with ITSM processes (incident management, problem management, change management, configuration management) and tools (preferably Jira JSM)
  • Strong problem-solving and analytical skills
  • Strong understanding and experience operating in an agile development environment.
  • Ability to multitask, work well under pressure, and prioritize work against competing deadlines and changing business priorities
  • Bachelor’s degree in a relevant field or equivalent work experience.

Responsibilities :

  • The DevOps Engineer is responsible for designing, provisioning, monitoring, and maintaining platform services.
  • The successful candidate requires experience in systems engineering and cloud automation (AWS and/or GCP), critical thinking, and strong scripting and coding skills that will be used to automate repeatable tasks.
  • Additionally, the candidate must be highly organized with strong communication skills and a mindset for continuous improvement.
  • Responsible for the design, creation, configuration, and delivery of cloud infrastructure environments using automation best practices and a proactive strategy.
  • Responsible for automating the infrastructure built in cloud environments.
  • Responsible for providing technology assessments in support of automation and migration to the cloud.
  • Responsible for the automation of operational processes (an example sketch follows this list).
  • Responsible for developing and driving improvements, maintaining/monitoring production and non-production systems, and ensuring platforms perform at maximum efficiency and security.
  • Automate the full SDLC for application and infrastructure-as-code workloads
  • Automate business continuity/disaster recovery
  • Participate in rotational 24/7 on-call
  • Perform root cause analysis for service performance incidents
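
As an example of the operational automation described above, here is a hedged boto3 sketch that reports EBS snapshots past a retention window; the region and 30-day window are hypothetical, and the actual delete call is left commented out.

    import datetime

    import boto3

    RETENTION_DAYS = 30  # hypothetical retention window
    ec2 = boto3.client("ec2", region_name="us-east-1")  # hypothetical region
    cutoff = datetime.datetime.now(datetime.timezone.utc) - datetime.timedelta(days=RETENTION_DAYS)

    paginator = ec2.get_paginator("describe_snapshots")
    for page in paginator.paginate(OwnerIds=["self"]):
        for snap in page["Snapshots"]:
            if snap["StartTime"] < cutoff:
                print("stale snapshot:", snap["SnapshotId"], snap["StartTime"])
                # In a real run, delete only after validating tags and backups:
                # ec2.delete_snapshot(SnapshotId=snap["SnapshotId"])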

Employment Type :
Full-time

Experience :
5 to 15 Years

Required Skills :

  • Expertise in PostgreSQL database management and administration
  • Expertise in database backup and recovery solutions.
  • Expertise in database security practices.
  • Expertise in at least one shell scripting language
  • Deep understanding of performance tuning, including indexing strategies, query optimization, and monitoring tools.
  • Strong knowledge of AWS RDS and AWS DMS
  • Experience using PostgreSQL extensions
  • Excellent problem-solving skills and attention to detail.
  • Strong communication and teamwork abilities.

Preferred Qualifications :

  • Certification in PostgreSQL administration.
  • Experience with other cloud database solutions
  • Experience with Terraform and Ansible
  • Experience with Python scripting
  • Experience administering Microsoft SQL Server, Oracle, or Yugabyte
  • Experience using Quest Foglight to solve performance problems

Responsibilities :

  • Database Management: Install, configure, and administer PostgreSQL databases.
  • Performance Tuning: Configure PostgreSQL for high performance, including parameter tuning and hardware optimization.
  • High Availability and Replication: Set up and manage high availability solutions such as replication, clustering, and failover mechanisms.
  • Migration: Plan and execute data migration projects to PostgreSQL or other database platforms across public/private cloud environments.
  • Query Optimization: Write and optimize complex SQL queries, including joins, subqueries, and window functions.
  • Advanced Data Manipulation: Handle complex data manipulation tasks, including bulk data operations and data transformations.
  • Scripting and Automation: Write scripts to automate routine database tasks and maintenance (see the sketch after this list).
  • Security: Implement security measures such as encryption, auditing, and fine-grained access control.
  • Troubleshooting: Diagnose and resolve complex issues, often involving deep dives into logs, system performance metrics, and query execution plans.
  • Documentation: Create and maintain database documentation, including data standards, procedures, and operational runbooks.
  • Collaboration: Work closely with developers, system administrators, and other stakeholders to ensure database integrity and security.
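
To illustrate the scripting item above, here is a hedged psycopg2 sketch that surfaces VACUUM candidates by dead-tuple count; the connection string is a hypothetical placeholder.

    import psycopg2

    conn = psycopg2.connect("host=db.example.com dbname=appdb user=dba")  # placeholder DSN
    with conn, conn.cursor() as cur:
        cur.execute(
            """
            SELECT relname, n_dead_tup, last_autovacuum
            FROM pg_stat_user_tables
            ORDER BY n_dead_tup DESC
            LIMIT 10
            """
        )
        # Tables with many dead tuples are the first candidates for VACUUM.
        for relname, dead, last_av in cur.fetchall():
            print(f"{relname}: {dead} dead tuples (last autovacuum: {last_av})")
    conn.close()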

Employment Type :
Full-time

Experience :
5 to 15 Years

Required Skills :

  • Bachelor’s degree in Computer Science or a related discipline, or equivalent experience with Oracle or similar relational database technologies
  • At least 10 years of Oracle DBA experience
  • Must be an Oracle Certified Professional
  • Thorough knowledge of Oracle technology, such as CDB/PDB, RAC, DataGuard, Exadata, Goldengate, and Grid Control
  • Understanding of advanced physical database design concepts like database partitioning, denormalization, buffer pool tuning, and index optimization.
  • Strong knowledge of at least one scripting language (a small example follows this list)
  • Excellent knowledge of SQL, PL/SQL, Packages, and Stored Procedures
  • Knowledge and experience with DynamoDB, PostgreSQL, or AWS Aurora is a plus
  • Good communication skills and must work well in a team environment
  • Required to provide 24/7 production support on a rotation schedule
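
As a small scripting example in the spirit of the list above, here is a hedged python-oracledb sketch that flags tablespaces above a usage threshold; the connection details and the 85% threshold are hypothetical.

    import oracledb

    conn = oracledb.connect(user="dba", password="secret",
                            dsn="db.example.com/ORCLPDB1")  # placeholders
    with conn.cursor() as cur:
        cur.execute(
            "SELECT tablespace_name, used_percent "
            "FROM dba_tablespace_usage_metrics ORDER BY used_percent DESC"
        )
        for name, pct in cur:
            if pct > 85:  # hypothetical alert threshold
                print(f"WARNING: {name} is {pct:.1f}% full")
    conn.close()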

Responsibilities :

  • Provide expertise to DBAs to address a specific issue or a project
  • Solve complex production issues to closure
  • Upgrade databases from version 12c to 19c
  • Migrate and convert Oracle databases to PostgreSQL
  • Migrate on-prem Oracle databases to AWS Aurora
  • Migrate Oracle databases from legacy hardware to Oracle Exadata machines
  • Implement database encryption, compression, and resource management plans
  • Create replication between databases, whether on-prem or in a cloud environment
  • Apply database patchsets as required by Infosecurity requirements
  • Assist application teams in deployments and release work
  • Serve as a strong technical resource and work independently with systems management, project managers, etc.

Employment Type :
Full-time

Experience :
6 to 8 Years

Required Skills :

  • Ab Initio
  • SQL
  • ETL
  • Data Warehousing
  • Unix/Linux
  • Shell Scripting
  • Performance Tuning
  • Technical Documentation

Qualifications :

  • Bachelor’s degree in Computer Science, Information Technology, or related field.
  • 6-14 years of experience in ETL development with Ab Initio.
  • Strong understanding of data warehousing concepts and ETL processes.
  • Experience with SQL and relational databases.
  • Ability to work independently and in a team environment.
  • Excellent problem-solving and troubleshooting skills.
  • Strong communication and interpersonal skills.

Responsibilities :

  • Design and develop ETL processes using Ab Initio software.
  • Collaborate with data architects and analysts to understand data requirements.
  • Create and maintain technical documentation for ETL processes.
  • Optimize performance of data processing systems.
  • Ensure data quality and integrity throughout the ETL process (a reconciliation sketch follows this list).
  • Perform unit testing and support system integration testing.
  • Troubleshoot and resolve data integration issues.
  • Stay abreast of new technologies and best practices in data integration.
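
Data-quality checks like the one above are normally built from Ab Initio's own graph components; as a language-neutral illustration only, here is a hedged Python sketch that reconciles row counts and a column checksum between source and target, using in-memory SQLite tables as stand-ins for the real systems.

    import sqlite3

    db = sqlite3.connect(":memory:")
    db.executescript("""
        CREATE TABLE src(id INTEGER, amount REAL);
        CREATE TABLE tgt(id INTEGER, amount REAL);
        INSERT INTO src VALUES (1, 10.0), (2, 20.5);
        INSERT INTO tgt VALUES (1, 10.0), (2, 20.5);
    """)

    def profile(table):
        # Row count plus a simple aggregate acts as a cheap integrity fingerprint.
        cur = db.execute(f"SELECT COUNT(*), COALESCE(SUM(amount), 0) FROM {table}")
        return cur.fetchone()

    assert profile("src") == profile("tgt"), "source and target diverged"
    print("reconciliation passed:", profile("src"))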

Employment Type :
Full-time

Experience :
6 to 10 Years

Required Skills :

  • Experience in designing and building RESTful APIs using microservice architecture.
  • Solid understanding of software design patterns, data structures, and algorithms.
  • Strong debugging, problem-solving, and performance optimization skills.

Preferred Skills :

  • Experience working in Agile development environments.
  • Familiarity with containerization and orchestration tools (Docker, Kubernetes).
  • Experience with the Scala Akka framework or a similar framework.
  • Working experience using Apache Kafka Consumer / Producer APIs (see the sketch after this list).
  • Familiarity with OSS (Operations Support Systems) concepts, SNMP protocols, or experience in telecom environments.
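
To illustrate the Kafka item above, here is a hedged producer sketch using the confluent-kafka Python client (the role itself is JVM-centric, so treat this as API shape only); the broker, topic, and payload are hypothetical.

    from confluent_kafka import Producer

    producer = Producer({"bootstrap.servers": "kafka.example.com:9092"})  # placeholder broker

    def on_delivery(err, msg):
        # Invoked once per message when the broker acknowledges (or rejects) it.
        if err is not None:
            print("delivery failed:", err)
        else:
            print(f"delivered to {msg.topic()}[{msg.partition()}]@{msg.offset()}")

    producer.produce("device-events", key="router-42", value=b'{"status":"up"}',
                     callback=on_delivery)
    producer.flush()  # block until all queued messages are acknowledged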

Responsibilities :

  • Learn and understand the architecture and workflows of a complex, existing distributed system.
  • Design, develop, and maintain new features and modules in Scala or other JVM-based languages (e.g., Java, Kotlin).
  • Optimize and refactor existing code to improve system performance and reliability.
  • Work on real-time data pipelines using stream processing frameworks such as Kafka Streams, Apache Flink, or Spark Streaming.
  • Manage and query distributed data stores such as Apache Ignite, Redis, or Cassandra.
  • Implement and maintain monitoring, alerting, and visualization dashboards using Grafana, Prometheus, or Kibana.
  • Support log aggregation and analysis through platforms such as Splunk, ELK Stack, or Graylog.
  • Contribute to building and releasing automation through CI/CD pipelines (Jenkins, GitLab CI, GitHub Actions).
  • Collaborate with cross-functional teams to ensure code quality, performance, and reliability.
  • Participate in code reviews, system design discussions, and continuous improvement initiatives.

Employment Type :
Full-time

Experience :
7 to 12 Years

Required Skills :

  • Strong experience in .NET Framework / .NET Core development.
  • Proficiency in SQL development and database optimization.
  • Working knowledge of AWS services (EC2, S3, Lambda, etc.).
  • Experience in CI/CD implementation using ADO and Harness.
  • Basic exposure to UI frameworks (e.g., Angular, React, or similar).
  • Understanding of DevOps principles and agile methodologies.
  • Knowledge of the Decisioning Portal and the OneTru Portal is a plus.

Preferred Qualifications :

  • Certification in AWS or Azure DevOps.
  • Strong problem-solving and analytical skills.
  • Excellent communication and teamwork abilities.

Responsibilities :

  • Design, develop, and maintain applications using .NET technologies.
  • Write optimized SQL queries, stored procedures, and database scripts.
  • Collaborate with cross-functional teams to deploy and maintain applications on AWS.
  • Participate in code reviews, troubleshooting, and performance tuning.
  • Work closely with UI teams to integrate front-end and back-end systems.
  • Contribute to improving automation, reliability, and deployment processes.