Students from various disciplines can apply. Interested and eligible candidates can read more details below.

Primary Responsibilities:

  • Deliver professional-level technical work in support of the development of company products, tools, platforms, and services related to big data, typically for an external customer or end user
  • Operate within established methodologies, procedures, and guidelines
  • Gather and analyse business requirements
  • Analyse Big Data infrastructure and solutions
  • Develop technical specifications and design documents
  • Develop, enhance, integrate, test and deploy Big Data Solutions
  • Solve technical and strategic challenges using innovative approaches
  • Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, changes in teams and/or work shifts, policies regarding flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so

Required Qualifications:

  • B.Tech/B.E. degree in an applicable area of expertise, or equivalent experience
  • 3–5 years of Big Data engineering IT work experience as a developer, engineer, or in a similar role (full time with a corporate employer)
  • 3+ years of hands-on experience with the following technical stack:
    • Spark/Scala (a minimal pipeline sketch follows this list)
    • Hadoop Ecosystem (Hive, HBase, HDFS)
    • Any cloud platform (Azure/AWS/GCP)
    • Apache Kafka
    • RDBMS (sound SQL knowledge)
    • NoSQL
    • Shell Scripting
  • 2+ years of work exposure to the Azure/AWS/GCP cloud
  • Experience in developing/coding Big Data software components using tools such as MapReduce, Hive, Sqoop, HBase, Pig, Kafka, Spark, and Scala
  • Experience with Spark/Scala, Pig, Oozie, streaming, and NoSQL databases
  • Experience in writing solid unit tests and integration tests
  • Expertise in and a deep understanding of designing and developing Big Data pipelines
  • Exceptional understanding of SRE practices, SLAs, SLIs, resiliency, etc.
  • A product/platform-centric view and a customer-centric approach
  • Ability to create operational dashboards, run retrospective analyses on issues, etc.
  • Proven ability to perform POCs and run end-to-end operations for emerging platforms, migrations, and upgrades
  • Willingness to learn cloud and big data technologies and to develop technical depth in a short timeframe
  • Available for rotational weekend support
  • Ability to work independently in a fast-paced, agile environment
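
To give a flavour of the day-to-day work this stack implies, here is a minimal, hypothetical Spark/Scala batch job that reads a Hive table, aggregates it, and writes partitioned Parquet to HDFS. The table, columns, and output path (sales.orders, order_ts, amount, /warehouse/curated/daily_revenue) are invented for illustration and are not taken from the role description.

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.functions._

    object OrdersPipeline {
      def main(args: Array[String]): Unit = {
        // Hive support lets the job read managed tables from the warehouse.
        val spark = SparkSession.builder()
          .appName("orders-daily-aggregate")
          .enableHiveSupport()
          .getOrCreate()

        // Hypothetical source table: one row per order event.
        val orders = spark.table("sales.orders")

        // Aggregate completed orders into daily revenue per region.
        val daily = orders
          .filter(col("status") === "COMPLETED")
          .groupBy(col("region"), to_date(col("order_ts")).as("order_date"))
          .agg(sum("amount").as("revenue"))

        // Write partitioned Parquet to HDFS for downstream consumers.
        daily.write
          .mode("overwrite")
          .partitionBy("order_date")
          .parquet("hdfs:///warehouse/curated/daily_revenue")

        spark.stop()
      }
    }

Keeping the transformation logic separate from I/O, as in the test sketch at the end of this post, is what makes jobs like this straightforward to unit test.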

Preferred Qualifications:

  • Experience working on Big Data engineering data pipelines
  • Experience operating large-scale data systems
  • Knowledge of modern DevOps practices, including automated testing, continuous integration, automated deployments, and CI/CD pipelines (a brief test sketch follows this list)
  • Understanding of IaC, GitHub, Jenkins, etc.
  • Understanding of, or work experience with, Snowflake
  • Open-source contributions to distributed systems
  • Ability to work with vendors to resolve issues or find workarounds
  • Proven to be highly productive, a self-starter, and self-motivated
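
As a hedged illustration of the unit-testing and automated-testing expectations above, a ScalaTest-style check of a small Spark transformation might look like the sketch below. The helper under test and its schema (Transforms.dailyRevenue with region/order_ts/status/amount columns) are assumptions made for illustration, not an actual company codebase.

    import org.apache.spark.sql.{DataFrame, SparkSession}
    import org.apache.spark.sql.functions._
    import org.scalatest.funsuite.AnyFunSuite

    // Assumed helper under test: a pure transformation with no I/O,
    // which is what makes it easy to unit test.
    object Transforms {
      def dailyRevenue(orders: DataFrame): DataFrame =
        orders
          .filter(col("status") === "COMPLETED")
          .groupBy(col("region"), to_date(col("order_ts")).as("order_date"))
          .agg(sum("amount").as("revenue"))
    }

    class DailyRevenueSuite extends AnyFunSuite {
      // A local[*] master keeps the test self-contained; no cluster is needed.
      private val spark = SparkSession.builder()
        .master("local[*]")
        .appName("daily-revenue-test")
        .getOrCreate()

      import spark.implicits._

      test("completed orders are summed per region and day") {
        val input = Seq(
          ("EU", "2024-01-01 10:00:00", "COMPLETED", 10.0),
          ("EU", "2024-01-01 11:00:00", "COMPLETED", 5.0),
          ("EU", "2024-01-01 12:00:00", "CANCELLED", 99.0)
        ).toDF("region", "order_ts", "status", "amount")

        val result = Transforms.dailyRevenue(input).collect()

        assert(result.length == 1)
        assert(result.head.getAs[Double]("revenue") == 15.0)
      }
    }

Because the suite runs on a local[*] master it needs no cluster, so it can run inside a CI/CD pipeline on every commit.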