Speaker

Tejas Chopra

Tejas Chopra is a Senior Software Engineer on the Data Storage Platform team at Netflix, where he is responsible for architecting storage solutions that support Netflix Studios and the Netflix Streaming Platform. Prior to Netflix, Tejas designed and implemented the storage infrastructure at Box, Inc. to support a cloud content management platform that scales to petabytes of storage and millions of users. He has worked on distributed file systems and backend architectures, both on-premise and in the cloud, at several startups over his career. Tejas is an international keynote speaker, periodically conducts seminars on software development and cloud computing, and holds a Master's degree in Electrical & Computer Engineering from Carnegie Mellon University with a specialization in Computer Systems.

Short talk

Going from three nines to four nines using Kafka

Many organizations have chosen a hybrid cloud architecture to get the best of both worlds: the scalability and ease of deployment of the cloud, and the security, latency, and egress benefits of local storage. Persistence on such an architecture can follow a write-back model, where data is first written to local storage and then uploaded to the cloud asynchronously. However, this means applications cannot take advantage of the cloud's availability and durability guarantees: the availability of storage is bounded by the availability SLA of the on-premise hardware, which is almost always lower than that of the cloud. By switching the order, i.e. uploading to the cloud first and then hydrating on-premise storage, applications get the benefit of the cloud's availability SLA. In our case, this allowed us to move from three nines of availability (99.9%) for local storage to four nines (99.99%). Instead of uploading in write-back mode, we duplicated the incoming stream to upload to both the cloud and on-premise storage. For on-premise uploads that failed, we leveraged Kafka's event processing to queue up objects that needed to be egressed from the cloud to local storage. This architecture allowed us to hydrate local storage with objects already uploaded to the cloud. Furthermore, since local storage space is limited, we periodically purged data from local storage and created a secondary copy of the data in the cloud, again by leveraging Kafka event processing.
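
Below is a minimal sketch of the hydration flow described in this abstract, assuming a Python Kafka client (confluent-kafka) and S3 as the cloud object store. The topic name, bucket name, and helper functions are illustrative assumptions, not the production implementation.

```python
# Illustrative sketch only: hypothetical topic/bucket names,
# assuming confluent-kafka and boto3 are installed.
import json
import os

import boto3
from confluent_kafka import Consumer, Producer

KAFKA_CONF = {"bootstrap.servers": "localhost:9092"}
HYDRATION_TOPIC = "onprem-hydration"    # hypothetical topic name
CLOUD_BUCKET = "primary-object-store"   # hypothetical S3 bucket

producer = Producer(KAFKA_CONF)
s3 = boto3.client("s3")


def on_local_write_failed(object_key: str) -> None:
    """When the duplicated on-premise upload fails, queue the object
    for later hydration from the copy that already landed in the cloud."""
    producer.produce(HYDRATION_TOPIC, key=object_key,
                     value=json.dumps({"key": object_key}))
    producer.flush()


def run_hydrator(local_root: str = "/mnt/local-store") -> None:
    """Consume queued keys and egress the objects from cloud to local storage."""
    consumer = Consumer({**KAFKA_CONF,
                         "group.id": "hydrator",
                         "auto.offset.reset": "earliest"})
    consumer.subscribe([HYDRATION_TOPIC])
    while True:
        msg = consumer.poll(1.0)
        if msg is None or msg.error():
            continue
        key = json.loads(msg.value())["key"]
        local_path = os.path.join(local_root, key)
        os.makedirs(os.path.dirname(local_path), exist_ok=True)
        # Pull the cloud copy down into on-premise storage.
        s3.download_file(CLOUD_BUCKET, key, local_path)
        consumer.commit(msg)
```

Queuing only the failed keys keeps the hydration traffic proportional to on-premise failures rather than to the total write volume.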

Talk

Using Kafka to solve food wastage for millions

Food wastage in India is roughly 70 million tonnes per year, with mismanagement at several layers. Approximately 20-30% of the wastage happens in the last mile, between wholesale traders and retail mom-and-pop stores. Is there something we can do about it? This was the problem statement I attempted to solve as the first engineering hire at a startup. Our customers were 12.8 million retail owners dealing in FMCG (fast-moving consumer goods, such as food grains, toothpaste, etc.). The goal was to develop a platform for retail traders (mom-and-pop shop owners and small and medium business owners) to buy FMCG products from wholesale traders using an Android app. We used the cloud extensively to develop microservices for a section of the population that is not very well versed with smartphones and technology. In this domain, data is very unstructured, and we had to come up with novel ways to represent this unstructured data for a slick browse and search experience. We used Kafka as the event streaming platform in conjunction with DynamoDB, Elasticsearch, and Redshift to build pipelines and systems on the cloud. Data ingested by sellers into DynamoDB would be transformed, published to Elasticsearch, and inserted into Redshift tables for analysis. I'd like to delve into these details as part of this session.
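
As a rough sketch of that pipeline, assume catalog updates originating from DynamoDB arrive on a Kafka topic (for example via a stream/connector bridge), get normalized, and are indexed into Elasticsearch for search while rows are staged for Redshift. The topic, index, and field names below are hypothetical, not the actual schema.

```python
# Illustrative sketch only: hypothetical topic, index, and field names.
# Assumes confluent-kafka and the elasticsearch Python client (8.x).
import json

from confluent_kafka import Consumer
from elasticsearch import Elasticsearch

CATALOG_TOPIC = "seller-catalog-updates"   # hypothetical topic name
es = Elasticsearch("http://localhost:9200")

consumer = Consumer({"bootstrap.servers": "localhost:9092",
                     "group.id": "catalog-indexer",
                     "auto.offset.reset": "earliest"})
consumer.subscribe([CATALOG_TOPIC])

redshift_staging_rows = []  # in practice, batched to S3 and loaded with COPY

while True:
    msg = consumer.poll(1.0)
    if msg is None or msg.error():
        continue
    item = json.loads(msg.value())  # raw, unstructured seller record
    # Normalize the unstructured catalog entry into a searchable document.
    doc = {
        "sku": item.get("sku"),
        "name": (item.get("name") or "").strip().title(),
        "category": item.get("category", "uncategorized"),
        "price_inr": float(item.get("price", 0)),
    }
    es.index(index="fmcg-products", id=doc["sku"], document=doc)
    # Stage a flattened row for analytics; a real pipeline would batch
    # these to S3 and load them into Redshift with a COPY command.
    redshift_staging_rows.append(doc)
    consumer.commit(msg)
```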