s/guide/advisor

I came to the campus for the first time since the start of my program. In the process I met some of my batchmates and my advisor as well. At this point, I was the sole student on the influence maximization project from the previous semester that would eventually get accepted to WSDM. I continued to push along slowly, trying to get results on various baselines and comparing them to our method(s). We had missed the previous conference deadline in May (due to the passing of my grandfather, among other things) and I wanted to make sure we wouldn’t miss the next one. After 3 weeks, I decided to head back home and continue working from there.

I came back to campus on July 19th, and switched hostels. I had gotten some solid results in the past month and my advisor (and our collaborator) decided to submit to a conference. I also got my first taste of illness and the on-campus hospital gauntlet, having been put out of commission for 4-5 days. I registered for combinatorics (CS604) this semester, with the goal of improving my proof construction, having struggled with proofs of submodularity (and gamma-submodularity) in the research project back in April. While I did gain some intuition on proofs and proving methods in different scenarios, I wasn’t really that good at it.
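For readers unfamiliar with the term: a set function is submodular if it exhibits diminishing returns. Below is my own illustrative Python sketch (not code from the project) that brute-force checks this property for a small coverage function:

```python
from itertools import chain, combinations

def coverage(sets, S):
    """Coverage: number of distinct elements covered by the sets indexed by S."""
    covered = set()
    for i in S:
        covered |= sets[i]
    return len(covered)

def is_submodular(f, ground):
    """Brute-force check of diminishing returns:
    f(S + v) - f(S) >= f(T + v) - f(T) whenever S is a subset of T and v not in T."""
    subsets = list(chain.from_iterable(
        combinations(ground, r) for r in range(len(ground) + 1)))
    for S in subsets:
        for T in subsets:
            if not set(S) <= set(T):
                continue
            for v in ground:
                if v in T:
                    continue
                if f(set(S) | {v}) - f(set(S)) < f(set(T) | {v}) - f(set(T)):
                    return False
    return True

# Three overlapping sets; coverage functions are always submodular.
sets = {0: {"a", "b"}, 1: {"b", "c"}, 2: {"c", "d", "e"}}
f = lambda S: coverage(sets, S)
print(is_submodular(f, ground=[0, 1, 2]))  # True
```

The brute-force check is exponential, of course, but it is a handy sanity test on toy examples before attempting a proof.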

As an aside, a **short description of CS604**: the course, run by Prof. Sundar, was divided into three segments - graphs (using Douglas West’s book), hypergraphs/set systems and discrete probability. Stasys Jukna’s Extremal Combinatorics was the main book for the course. Each segment had an assignment associated with it, and we had to try to prove the results by ourselves, using lecture videos only after some attempts. There was no real “instruction” in this course - it was meant for solving problems and then discussing solution approaches in class. Three quizzes were conducted and two sets of homework problems (not really related to the topics at hand) were given out. There was no mid-semester exam, and the end-semester exam held an ominous 50% overall weightage. Take this course only if you’re theoretically inclined and/or have the time to make up for it by trying out many proof-based questions from books/other material.

Things heated up in the lead-up to the conference deadline. We were initially chasing AAAI, but then decided on WSDM for a more topic-targeted submission. My birthday came and went quickly in the midst of work (grateful to my friends for celebrating it regardless). I also had a rather unpleasant TA experience close to the deadline but managed to weather it. We finally submitted the paper very close to the deadline and I took some well-deserved time off after this. Later this month, I hosted my parents at the IITB guest house when they came to the campus for the first time, which felt amazing.

I was put on a project with an applied scientist in the research division of a well-known e-commerce company. The original idea involved improving recommendation systems with the help of a subset selection algorithm. However, I came to realize that the codebase left to us was not in a very workable state, and was hard to understand. We eventually pivoted to a method based on a mix of curriculum learning and matching networks, both very interesting ideas in their own right. We planned to submit to WWW the next month, but as fate would have it, we could not continue due to administrative issues. For what it’s worth, it was a good experience and the AS was very helpful and guided us well.

The most notable incident from October (or what should have been, anyway) was the notification of acceptance to WSDM. What should have been a month revolving around celebrating the acceptance and putting my head down in other projects, instead became a month of neurochemical mind trickery and intense emotion. I’ll spare the details on the weird experience, which lasted into the end of the year. The semester concluded in November with the end-semester exam for CS604. Some of my batchmates and I also applied for Google Research Week, which was due to happen in January.

My guide and I had discussed more ideas for my thesis, including an adversarial flavour to influence maximization, but I did not progress much at the time. Eventually the time came for my BPS (in November), or semester-wise assessment of progress towards my thesis. My committee consisted of two professors from the CSE department, both working in varying degrees of theory. I presented my WSDM work to them and explained influence maximization in brief. In return, the professor who worked in game theory suggested ideas for opinion-dynamics based influence spread in social networks. With this, I went back home at the end of the month.

Other than attending weddings and wedding-related events this month, I also got the chance to attend the IndoML symposium at IIT Gandhinagar. It took place over the course of three days where a number of speakers from various institutions delivered talks. A poster presentation was also conducted on the first day, where (mostly) PhD students (some from outside India) presented their work done so far. I met with my guide here as well, who gave a tutorial on submodularity along with a collaborator professor from UT Dallas. I returned to campus after this, and got some work done. Mood Indigo was a refreshing change at the end of the month - I attended a Sunidhi Chauhan concert for the first time too. Some of us also received an accept notification for Research Week around this time.

I was parachuted (drop-shipped?) by my guide into a project on learning resource-efficient mixture models for clustering on the first day itself. The goal was to carry this project to ICML, whose deadline was at the end of the month. I was given the responsibility of the experiment pipeline and was to obtain results on a completely different set of experiments than what was originally thought out for this project. This we did over the course of the month, and I had help from one of the first-authors of the paper, to whom I am grateful.

I registered for **CS728**: Organization of Web Information, which happens to be an NLP research course *organized* (heh) and run by Prof. Soumen. Prior experience in NLP is not mandatory but recommended, and some background in machine learning/deep learning is preferred. The initial classes discussed classical sequence models, progressing to word embeddings and representations, basic neural models for sequences, and sequence-to-sequence models with attention. We also covered many transformer-based models. In later classes we covered applications such as NER, knowledge graphs, knowledge-graph guided question answering, KG representation, multi-task language models and large language models in general. Oh, and one class was devoted to graph neural networks (which I am glad for, because that one fun question on GNNs in the end-semester exam really set me up). I was lucky to have two batchmate friends who were already pretty good at NLP with me in this course.

We submitted the ICML project close to the deadline (again), and then I went with one of my friends to attend Research Week in Bengaluru. We were put up by Google at a hotel near the venue. There were talks by the heavy-hitters of their respective fields, and I had insightful conversations with researchers such as Partha Talukdar and Prof. Balaraman Ravindran during socials. We were quite grateful to Google for their hospitality and expert management of the event(s).

By this time, my e-visa application to Singapore was ready and I submitted it to the travel agency. I worked with multiple professors and the CSE department to obtain leave permissions around the time of the conference. With a small detour to Kolkata for a family function, I headed to Singapore that Sunday to attend WSDM.

My experiences in Singapore warrant a post of their own, given that it was my first time outside national borders. It was fun in general and I made a few friends at the conference. My paper was due to be presented as a poster on the second day, and I had to explain (and defend) our design decisions in front of other people for two hours. It was a surreal experience. If you’d told me three years ago that this is what I’d be doing in the future, I would not have believed it. I also attended some workshops and tutorials on the following days and ice-breaker events run by the organizers. While my guide did not attend, the collaborator professor from IIT Delhi did, and I met with him and his student. His other paper even made it into the WSDM top 10, which was pretty cool.

I was back on campus by the first week of March, with another minor detour to Bengaluru to complete the family function circuit.

In short, these months passed mainly in course related stuff (quizzes, end-semester exam), discussion of other projects, presenting my WSDM poster at a department research symposium and participating in the ICML rebuttal. In April, we received news of the accept, which was a pretty big deal.

Now I’m back on that multi-project grind. More for a later post!

Run by Prof. Sunita Sarawagi and truly an advanced-level course. You may come across other reviews of this course by undergrads on their respective DAMP websites; they mention that the course requires a significant amount of effort to keep up with - and having taken it, I believe this to be true on average. The instructor covers probabilistic graphical models and some generative models in the first half, and interesting topics from recent deep learning papers in the second half (Gaussian processes, energy based models, causality, normalizing flows, time-series forecasting with Gaussian copula, etc.), plus sampling. Classes were taken on Teams, and there were also some interesting guest lectures by researchers from industry (think Google Research etc).

There was one theory assignment, weekly quizzes (which could be taken only once and carried greater weightage than the assignment), two exams and one course project. There were sample questions for almost every lecture, but understanding how the answers came about took a while initially. Questions in the quizzes, assignment and exams did require some thought, in increasing order of magnitude. You will benefit greatly from a good background in probability before starting this course. Some knowledge of graph theory and deep learning basics would also be helpful. Form a study group with at least one good student in it so that you can discuss whatever was taught and clear your doubts. Take conceptual help from the TAs too - I want to add that Lokesh, a PhD student in the instructor’s group, was particularly helpful to me.

For the PGMs portion, there is enough material available online (some of which I can recommend are Stanford, CMU, this helpful resource on d-separation by MIT, the Koller-Friedman book and any other material one can google on Bayesian networks). For generative models like the VAE and GAN, the prof. provided us with lecture notes. For the second half, too, we were given lecture notes, resources, and sample questions crafted by the instructor.

The project was an interesting experience. I got to choose a topic that interested me, which I proposed to my team and then led the charge. This was a first for me wrt ML projects. We attempted a modification of this ICLR workshop paper on graph energy based models, where we tried to learn one part of the GEBM architecture with a linear-additive neural network. We had to then defend our design choices during the final presentation for the project.

Everything I want to say for this course has already been said in earlier DAMP reviews. Note that a sizeable number of undergrads took the course this time, and this affected grading. Take the quizzes seriously and don’t dip on any of the exams, or your final grade may be affected.

Run by Prof. Ganesh Ramakrishnan, earlier CS709. This course is a bit different from standard convex optimization courses, in that it reduces coverage of KKT and duality in favour of submodular optimization in the second half. The first half covers convexity theory and the linear algebra and multivariate calculus involved, up to the analysis of vanilla gradient descent. The second half largely covers the various kinds of gradient descent, up to projected and proximal gradient descent, makes a detour to cover KKT and its relation to the proximal operator (with an aside on the duality gap), and then dives into submodular optimization. Classes were taken on Teams. Since I had come across submodularity in prior research work/readings, I was interested in taking this.

This course forced me to practice and get better at multivariable functions and calculus, and to see ML objective functions in that light, for which I used a set of notes provided by the instructor as well as this book, particularly the vector calculus chapter. I’ve seen mild complaints about previous iterations of this course alleging that the instructor spent too much time on the math basics. It did not feel that way in this iteration, although they did emphasize solving questions (each lecture had ungraded HW questions). I also used various online resources, such as Stanford etc., and the book by Boyd and Vandenberghe for topics I had doubts in. The Moodle for this course was packed with content, including lecture videos from previous iterations of this course, the instructor’s notes, and other resource PDFs. The course consisted of two assignments (both included programming questions), two exams, two paper readings (report + ppt) and a course project (report + ppt + code). Questions in both the assignments and the exams required a bit of thought but were well-made and focused on application of concepts.
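As an illustration of one of the variants covered, here is a minimal sketch of projected gradient descent on a toy problem (my own example, not course material): take a gradient step, then project back onto the feasible set.

```python
import numpy as np

def projected_gradient_descent(grad, project, x0, lr=0.1, steps=200):
    """Repeat: gradient step on the objective, then Euclidean projection
    back onto the feasible set."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = project(x - lr * grad(x))
    return x

# Toy problem: minimize f(x) = ||x - c||^2 subject to x in the box [0, 1]^2.
c = np.array([2.0, -0.5])
grad = lambda x: 2 * (x - c)              # gradient of the objective
project = lambda x: np.clip(x, 0.0, 1.0)  # projection onto the box is a clip

x_star = projected_gradient_descent(grad, project, x0=[0.5, 0.5])
print(x_star)  # converges to [1.0, 0.0], the box point closest to c
```

For the box constraint the projection is just a coordinate-wise clip; for more complex constraint sets the projection itself can be an optimization problem, which is where proximal methods come in.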

In the course project, I proposed a re-implementation of this paper on document summarization. My teammate implemented a performance review on multiple hyperparameter configs and I tried to implement a deep submodular function for doc summarization. The latter didn’t work out, but I documented what I tried and where it failed. This helped us a lot during the presentation, which the instructor himself took and asked many questions.

This course covered a lot of material and you have to put in some time to get a hang of everything, similar to 726. Taking both 726 and 769 together probably wasn’t a great idea in hindsight, because there was a lot of pressure in the last month with the end semester exams, presentations, research work and seminar. But if you feel you can handle it, you end up gaining a lot of knowledge and maybe even some ideas for your future research.

Run by Prof. Varsha Apte, and is a core pass/no-pass course with separate allocation of credits. I’m a bit conflicted about this course, mainly because of the amount of time it demanded in an already packed semester. The time requirement is similar to or maybe a little less than CS699 from the first semester. I did learn a few things, such as making worthwhile presentations, how to write survey reports, how to read papers, how to look up whether a conference is worth your time in your field, etc. so I cannot completely discount its usefulness.

People tend to take this course lightly and there have been reports of plagiarism. Don’t do that. There were warnings from DDAC (disciplinary action committee) and if a report goes to them, you will end up in trouble. My batchmates and I never plagiarized but also didn’t think these warnings would amount to anything. Turns out some people did get reported, and I don’t know what became of them. There are also people who didn’t follow the guidelines for the final mini-seminar talk/presentation and ended up having to retake the course in the summer, which is a bit painful.

I continued with my guide from the previous semester as we explored a variant of the earlier problem. We had a different undergrad on the project this time and he and I collaborated a lot. I got to see what the full lifecycle looks like, from wrangling with theory to coding up a full experiment pipeline, and was a bit more confident than last time.

Note that your research work will clash with your coursework and you either have to take courses that aren’t all heavy, or have to be able to handle the madness that ensues. There is also no guarantee that you’ll be ready in time for a conference submission. Many things can happen - the pressure becomes crazy, people may leave, etc. You have to pick yourself up and keep going even when the current is against you.

It helps to implement ML models and other algorithms by yourself as a way of supplementing your background knowledge.

As before, this course culminated in a project presentation in the presence of an external professor, and the submission of a report.

A core course that is common to MTech and MS students. You are supposed to pick a guide to explore your future thesis with, and the seminar is supposed to be a literature review for this purpose. In my case, I read up on literature broadly within the scope of information diffusion. I discovered a few new interests I didn’t think I’d have, esp. the field of geometric deep learning and the effect of geometries on ML architectures.

My guide was the same as my RnD guide, and we had weekly meetings where I would present a bunch of papers that I had read, and he would make suggestions about possible future work. We progressed in this manner until he was satisfied that I had covered enough material, and then focused on RnD instead as that took up more of our time.

I had to write a seminar report and make a presentation similar to RnD, and presented it to the same external professor. The things I learned from CS899 helped here during the presentation. Once again, seminar and RnD together could take a toll on your course selection, so be wary of that.

As an ending note, you have to take a minimum total of 58 credits worth of courses in the MS program if you are a TA, so you cannot skip any of the five electives given in the program structure. The “*” beside an elective denotes that it is mandatory for that semester and can’t be moved to another semester. So if you plan to retag some courses to additional learning and vice versa (e.g. to boost your CPI), make sure that you have taken enough courses to be able to do so.

It was so bad that I couldn’t run Google Chrome without the machine running into a kernel panic. I switched to Firefox and attempted some optimizations I found on the internet, and while this made the machine usable, the slowness remained. It was particularly bad when on a video call on MS Teams and Meet. Nothing seemed to work. Not even increasing the RAM from 6 gigs to 12 helped.

But I finally made headway a few days ago. I came across this free app by the developers at Titanium Software, called OnyX, which among other things fixes corrupt system configuration files and “recalibrates” the system. After a reboot, my machine was back to 80-90% of its pre-Catalina snappiness. A surprise to be sure, but a welcome one!

So if you’re at your wits’ end, you might want to give OnyX a try. Keep the OS rollback as a last resort.

I couldn’t really find anything about him on the internet, and it felt like he had been erased. This post is meant to be a little corner of the internet dedicated to him.

Master moshai, as we fondly addressed him, was a kind-hearted, jovial gem of a person, despite his earlier life circumstances. Travelling to his family home as part of my mother’s sarod recitals was a core childhood memory for me - taking the metro from Tollygunge to Belgachia, and then a rickshaw to Paikpara colony. His home gave off a warm, old-world vibe and I would sit with ma as she practiced in the living room, interspersed with his advice and critique. He was very fond of me, and his daughter gave me the nickname of “chatterbox” because I used to blab a lot as a kid. This was during my school summer vacations, as ma and I used to travel to Kolkata to stay with my grandparents at Naktala.

I used to be spellbound on hearing either of them play classical ragas, and so it was only a matter of time before I had a sarod in my hands too. Alas, I was not a very focused student then, so while I passed the first level examination (also conducted at their home), I never took it up further. My mother couldn’t continue her lessons later either, due to life circumstances. Master moshai passed toward his heavenly abode on 11th July, 2009, following the passing of his wife, an equally sweet lady.

I’ve added below an old profile on his life verbatim. I’ll update this post with anything else I can find on him.

Pranabangshu Mukherjee is one of the foremost sarod players in the nation today. He received his first lessons from Sri Sita Kumar Dutta of the Ustad Karamatulla and Kukubh Khan gharana, and he refined his playing under various maestros of the Alauddin (Maihar) gharana. He improved his 'taal' and 'lay' under the able guidance of Professor Shamal Bose.

He stood first class first in the All India Competition in 1960 and also received an award in the Inter Collegiate Music festival of 1959. He is enlisted as a sarod player for All India Radio, Calcutta in the 'B' group, since 1981. He gave his sarod recital on Calcutta TV and on many outstation programmes of the AIR also. His programmes in the Sadarang Music Conference of 1975 and others in Calcutta were widely appreciated by then critics and his audiences.

Some other conferences where he was invited to play include the Pune Film Institute, Vishwa Bharati (Shanti Niketan), Sangeet Mandali and Swar Sadhana in Gujarat. He gave his most recognized performance nationwide as part of the AIR Tuesday Night Concert on 9th February, 1988.

He maintains the purity of the ragas in his inimitable style. His improvisation of 'alap' and 'jor' mesmerizes his audience. Also noteworthy are his 'laykari' in the 'gat' and 'dhima' portions of his play, and in the 'drut' portion his 'bandish', fast 'tans' and 'jhala' are worth mentioning.

He earned his Sangeet Pravarkar certification from Allahabad Prayag Sangeet Samiti.

There was a lot of stress in the aftermath as well, as some of the questions were heavily disputed on GateOverflow and we didn’t know how many marks we would get. And then there were the tests and interviews once we knew how far our scores would take us. I’m kinda glad I gave myself the experience of taking at least one national-level exam successfully, what with all the thrill and exasperation that ensued. I hope this experience will pay off many times in the future, and even if not, it’s been worth it.

It took some time to adjust and get past the initial shock of assignments, quizzes and projects. The MS program focuses mainly on research, and in a move not seen in other institutions, IITB’s MS includes a core RnD course in the first semester itself. Juggling that along with course commitments took a while to get used to, since I was completely new to ML. I learned a ton from my peers as well.

Here, I write about the courses that I took this semester and what to look out for. For the MS program, you have to take at least one elective that is related to your stream - e.g. CS725 for Intelligent Systems, CS744 for Systems, and so on. The rest can be as per your choice. The TA and RA/RAP program structures are different and can be found in the MS rule-book.

Run by Prof. Rohit Gurjar. Covers basics, divide and conquer, greedy algorithms and dynamic programming in the first half. There is an emphasis on proofs in the greedy portion and intuition on how to frame the problem in the dynamic programming portion. Multiple different problems are covered in divide and conquer (e.g. integer squaring) and DP (margin slack). In the second half, covers some parts remaining in greedy/DP and substantial ground in bipartite matching (taxi scheduling/augmenting paths), randomized algorithms (Karger’s min cut, reservoir sampling), approximation algorithms (load balancing) and complexity classes/reduction. Classes were taken on Zoom/Webex.
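Of the randomized algorithms covered, reservoir sampling is short enough to sketch from memory (my own illustrative version, not the course’s code): it maintains a uniform random sample of k items from a stream whose length is unknown in advance.

```python
import random

def reservoir_sample(stream, k):
    """Keep a uniform random sample of k items from a stream of unknown length."""
    reservoir = []
    for i, item in enumerate(stream):
        if i < k:
            reservoir.append(item)          # fill the reservoir first
        else:
            j = random.randint(0, i)        # item i survives with prob k/(i+1)
            if j < k:
                reservoir[j] = item
    return reservoir

random.seed(0)
print(reservoir_sample(range(1000), k=5))
```

The neat part is the invariant: after seeing i items, every item so far is in the reservoir with probability exactly k/i, using O(k) memory and a single pass.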

There are two theory assignments and two programming assignments, one of each in each half. All of them have interesting problems and may take days to solve fully. The midsem and endsem exams were proctored remotely using CodeTantra and submissions made on SAFE. Both exams were divided into two halves with a 10-minute break in between. The questions consist of scenarios related to what is taught in class - if you’re able to understand what is being asked, you’ll do fine.

The pace of instruction is slightly slow, but this is not an issue since recordings are available. The prof takes doubts in class and is available on Teams for discussion. He was very helpful in resolving some cribs I raised in both exams.

I took this course as **additional learning**, meaning that while its grade appears on the transcript, it does not count in the final CPI. I might retag it later (we have the flexibility to do that later in the program). Take this course if you’re interested in algorithms or you feel that it may be relevant to your research.

Run by Prof. Preethi Jyothi. Provides an introduction to several key topics in ML. Starts with basics in linear algebra and probability, and then covers ML basics (bias-variance tradeoff, regularization, MAP vs MLE, basis functions), linear and logistic regression, Naive Bayes, perceptrons and decision trees in the first half. Covers deep neural nets, SVMs, kernels, K-means clustering, dimensionality reduction (PCA, LDA), Adaboost and applications in the second half. Misses out on Gaussian mixtures, bagging and random forests which can be covered separately.

Books and online resources may be needed for a deeper understanding of some topics, as this is an introductory course. The prof provides supplementary material for every week’s topics and that was very helpful: for example, Nielsen’s book for neural nets and the backprop process, Mitchell’s book for decision trees, Bishop and ESL for multiple ML concepts, and Andrew Ng’s notes for SVMs and kernels. There was one programming assignment on linear regression and gradient descent and one course project, to be done in teams (which my team used to explore GANs for doodling).
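To give a flavour of the assignment topic, here is a minimal sketch of linear regression trained with batch gradient descent (my own illustrative code, not the actual assignment):

```python
import numpy as np

def fit_linear_regression(X, y, lr=0.01, steps=5000):
    """Batch gradient descent on mean squared error for y ~ Xw + b."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(steps):
        err = X @ w + b - y
        w -= lr * (2 / n) * (X.T @ err)   # gradient of MSE w.r.t. weights
        b -= lr * (2 / n) * err.sum()     # gradient of MSE w.r.t. bias
    return w, b

# Recover a known linear relationship y = 3x + 1 from noiseless data.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(100, 1))
y = 3 * X[:, 0] + 1
w, b = fit_linear_regression(X, y)
print(round(w[0], 2), round(b, 2))  # 3.0 1.0
```

On noiseless data the iterates converge to the exact coefficients; with noise you would recover the least-squares fit instead.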

There were proctored quizzes in both halves that were harder than the corresponding exams. Some amount of mathematical maturity is needed (calculus, probability and linear algebra), but that should be expected in any serious ML course. A good team can help you sail through the practical parts of the course and I was very fortunate to be in one. If you’re an absolute beginner to ML, you will struggle a bit but it will all come together with some persistence. One thing I noticed my more knowledgeable peers do is try to implement every concept that they were taught and I will try to imbibe this.

The prof was very helpful and always available on Teams for discussion. I resolved some of my doubts this way, e.g. on Mercer’s condition for kernels. I took this course as an elective, but it was mandatory due to my choice of MS stream. It is also a prerequisite for CS726.

Run by Prof. Soumen Chakrabarti. Covers a wide variety of topics related to information retrieval, and although I’ve linked to an older version of the course, it gives a fair idea of the syllabus. I took this course because of the graphs component and mild interest in the other components, but that is only one small part (if you are interested in graphs and have a bit of ML experience, take CS768).

Lecture material often touched on relevant research papers, for e.g. in topics such as learning to rank and graph analysis. It may feel somewhat disconnected from the bookish basics of IR which are tested in quizzes and exams, but the topics covered are interesting in their own right. Students are implicitly expected to cover the meat of the stuff from books (Stanford’s IR book being most relevant, and Bishop’s ML book being the next). Some other resources I found relevant were Stanford’s Mining Massive Datasets book (for LSH) and Barabasi’s networks book. Online resources were also important. It took me a while to realize all this though.

There were two programming assignments, one on coding schemes (Gamma, Golomb, arithmetic) and one on Naive Bayes text classification using multinomial and Poisson models. It’s worth noting that on the second assignment, I had a valuable back-and-forth with the prof on email and he was very encouraging and gave me tips to deal with some implementation problems that I was facing. He even showed interest in the results that I obtained and these little gestures stay with you long after the course is over.
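For a flavour of the first assignment, here is a from-memory sketch of Elias gamma coding (illustrative only, not my submission): the code word is a unary run of zeros giving the length, followed by the binary representation of the number.

```python
def elias_gamma_encode(n):
    """Elias gamma code for a positive integer:
    (len(binary) - 1) zeros, then the binary representation itself."""
    assert n >= 1
    binary = bin(n)[2:]               # always starts with '1'
    return "0" * (len(binary) - 1) + binary

def elias_gamma_decode(bits):
    """Decode a single gamma-coded integer from a bit string."""
    zeros = 0
    while bits[zeros] == "0":         # count the unary length prefix
        zeros += 1
    return int(bits[zeros:2 * zeros + 1], 2)

for n in [1, 2, 5, 13]:
    code = elias_gamma_encode(n)
    print(n, code, elias_gamma_decode(code))
```

Gamma codes are prefix-free and favour small integers, which is why they show up in inverted-index gap compression.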

All this being said, take this course only if you’re already comfortable with the IITB system and have sufficient mathematical maturity (or equivalently, have a decent background in machine learning). I’ve noticed grad students (including me) struggle with this course and senior undergrads having the time of their lives. I say this because the exams - and there are *three* of them - involve interesting math/ML/data-structure questions where those who have been solving such problems in other courses already greatly benefit. Those who have implemented a bit of ML (played with tensor ops at least) also benefit. This may not always be true for fresh grad students. If you do take this course, make sure to request the TAs for past exams for practice. Questions never repeat in these exams but concepts may.

I took this course as an elective. It also used to be a prereq for CS728, but that may have changed - you’d have to confirm this with the instructor.

Run by Prof. Kavi Arya, organized by the TAs. Unlike the others, this is a core course with a greater number of credits assigned to it. Covers a variety of topics in software development, scripting, and scientific computing. The variety may seem overwhelming and the need for such a course may even seem debatable (and in fact, there has already been a lot of pondering on this in the past). It worked out well for me in the end though, and I did pick up a few skills (like LaTeX).

I should add that I’ve spent some time in the industry and so this course didn’t give me any trouble. Your experience/mileage depends heavily on the TAs. On average, they did their best to accommodate us but this may vary from person to person, so I can’t really make a blanket statement. It seems clear to me now that this course was introduced keeping in mind the large population of recent undergrad passouts who join via GATE. If you join directly from a tier 3 college (I graduated years ago), you may not have sufficient programming experience (CP is not programming experience) or experience with tools.

Includes a course project to be done as a team, and my team made a web scraping project for this. Project is a substantial portion of the grade so it needs to be taken seriously. The lab assignments were divided such that the ones before the midsem were to be done individually, and the ones after were to be done in teams.

This course in addition to others does make your course structure heavy, so plan on the courses you take wisely.

Not really a “course”, but graded like one. Also a core course with greater credits. You’re supposed to pick an advisor in your stream and work with them on a research problem. I went with Prof. Abir De as I’m interested in graph mining, graph ML and network science. As part of the problems we explored in information diffusion, we also collaborated with a prof from another IIT - “we” meaning me, our prof and a senior undergraduate.

I felt completely out of my depth for the first half of the semester, and I guess that’s normal for a rookie like me. I tried to gather background knowledge (e.g., Jure Leskovec’s CS224W) with enthusiasm, but initially it didn’t stick because I was very new. With time and work (and collaborating with the undergrad), things began to make much more sense.

Final deliverables included a report and a presentation (at least in my case). The presentation was graded by an external professor chosen by the advisor. Make sure that you know what you’re talking about in sufficient depth and can answer follow-up questions. The external professor was satisfied with my answers.

It’s rather clear that one needs to balance courses against research, especially in the earlier semesters. In the later semesters of the MS program you focus solely on research, so course pressure is not an issue. Had I taken a lighter courseload, I could have picked things up during RnD more quickly. Still, I don’t regret the knowledge I gained, all things considered.

Hopefully this helps the next batch of MS entrants!

You may also want to check Gokul’s list of resources.

In this phase, I had to prepare entire subjects from scratch. I had not touched most of these CS books and topics in years, and certainly not at the level prescribed by the IITs/IISc. I spent the first 6 of 8 months making notes and solving half of the GO PDFs, and the remaining 2 months finishing the rest of the GO PDFs. I took only two mock tests near the end, which turned out to be a big weakness. I followed Bikram Ballav’s “What to Read” PDF when searching for resources to make notes from. I solved PYQs for all subjects, but I specifically mention them under some subjects below because certain kinds of PYQs appear frequently in GATE.

- Data Structures: Mark Allen Weiss for some topics (AVL trees, general trees, code complexity calculation). Cormen for most other topics like heaps, binary trees. GO Volume 2 PDF for solving previous-year questions (PYQs) on C programming and data structures (no better source than this).
- Algorithms: Cormen hands down. Covers almost everything you can think of. Sartaj Sahni for some topics like the greedy knapsack problem. Note that PYQs on this topic are a mix of mathematics (combinatorics, probability) and algorithm design. There are some extra topics like the tournament method of finding min-max that are covered best in GO answers.
- Databases: Korth-Sudarshan for topics like ER diagrams, SQL, relational algebra, a bit of indexing, and concurrency. Navathe for DB normalization. NPTEL videos by Prof PP Chakraborty on some topics like relational calculus.
- Computer Networks: Kurose and Ross for most topics. Tanenbaum for topics in routing and the various layers.
- Operating Systems: Galvin for some topics, Stallings for memory management and a bit of synchronization. NPTEL videos by Prof PK Biswas of IIT Kharagpur for topics such as synchronization, file systems.
- Compiler Design: Mostly the Dragon book. I tried to solve exercise problems by myself and used this Github repository to self-check answers. A background in theory of computation is mandatory.
- Theory of Computation: NPTEL videos by Prof Kamala Krithivasan of IIT Madras. PYQs in GO PDF Vol 2, of which the majority were problems on regular languages, regular expressions, minimal state automata. Prof Shai Simonson’s ADuni videos for decidability. GO for Rice’s theorem and some university PDFs on reduction, although Shai’s videos have a better way of explaining.
- Digital Logic: Morris Mano for making notes and some problems. NPTEL lectures by Prof Srinivasan of IITM. Mostly PYQs for problem solving.
- Computer Organization and Architecture: NPTEL videos by Prof Matthew Jacob of IISc, especially for topics like pipelining and cache memory, although these are *not* enough. This is a subject that should be covered in depth from standard books and videos. PYQs cover the vast majority of possible question types.
- Mathematics: Rosen for discrete mathematics concepts and some solved problems. Erwin Kreyszig for Linear Algebra. Sheldon Ross for probability. NPTEL videos by Prof Kamala for group theory and some other topics that have appeared in GATE. Some PDFs prepared by Arjun Suresh of GO for topics like combinatorics and graph theory (conceptual understanding), although I don’t remember where I found them.
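The Algorithms bullet above mentions the tournament method of finding min-max. As a quick sketch of the idea (function name and structure are mine, not from any of the books listed): by processing elements in pairs, each pair costs 3 comparisons instead of the naive 4, giving roughly 3n/2 comparisons overall.

```python
def min_max(arr):
    """Find both min and max via the tournament (pairwise) method,
    using about 3n/2 comparisons instead of the naive 2n."""
    n = len(arr)
    if n == 0:
        raise ValueError("empty array")
    # Seed the running min/max; handle odd length by starting
    # both candidates at the first element.
    if n % 2:
        lo = hi = arr[0]
        start = 1
    else:
        lo, hi = min(arr[0], arr[1]), max(arr[0], arr[1])
        start = 2
    # Each pair: 1 comparison within the pair, then the smaller
    # is compared against lo and the larger against hi.
    for i in range(start, n, 2):
        a, b = arr[i], arr[i + 1]
        if a > b:
            a, b = b, a
        if a < lo:
            lo = a
        if b > hi:
            hi = b
    return lo, hi
```

This is exactly the comparison-count argument GATE questions on this topic tend to probe.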

I analyzed my 2020 exam and found a number of weaknesses: I wasn’t good enough at either regular or timed problem solving, and I had not taken enough mock tests. I set out to fix these issues. I found Gatevidyalay a good guide for which exercise problems to solve in the standard books, and in the process discovered that quite a few GATE questions appear almost directly, or with little modification, from these books. I also took four different test series this time (GO, Applied Course, GATEBook, Testbook), of which I found GO’s lengthy/tricky tests to be the closest analog to GATE 2021. Yes, I did not use any of the popular coaching test series; I didn’t see the point. I should have solved more from GO!

Do both weekly subject tests and full mock tests in the last few months. GATEBook’s weekly subject tests covered a lot of tricky concepts and drew problems from standard sources as well as the old GRE CS subject tests. I am not going to list subject test contributions below.

- Data Structures: Mostly a repeat from last year, focussed on PYQ problem solving this time.
- Algorithms: Mostly a repeat from last year. Covered a few topics like binary search tree range queries from the 2020 paper.
- Databases: Navathe for concepts and problems from ER modelling, **indexing** (problems come almost verbatim from here!) and a bit of concurrency. Raghu Ramakrishnan for concurrency management and transactions.
- Computer Networks: Peterson and Davie for exercise problems in the bottom 3 layers. Wikipedia for application layer protocols. Kurose and Ross, Tanenbaum and RFCs for transport layer congestion control.
- Operating Systems: Tanenbaum for concepts and problem solving in topics like process scheduling, disk scheduling, disk numericals, file systems, memory management (especially multilevel paging). Little Book of Semaphores for a bit of mental mapping in synchronization.
- Compiler Design: A new chapter was added this year: dataflow analysis and optimization, which I covered from the Dragon Book, but mostly from online notes from Stanford’s class (not Alex Aiken) and these notes.
- Theory of Computation: I covered all topics except decidability from Peter Linz, and it made a huge difference in my performance. Solving exercise problems gave me an in-depth understanding here. Also used Sipser to study the minimum pumping length. In hindsight, I should have solved decidability problems from Linz too.
- Digital Logic: Covered topics such as combinational circuits (especially adders), Booth multiplication, floating point arithmetic from Computer Organization by Carl Hamacher and solved some exercise problems. Rest is a repeat from last year.
- Computer Architecture and Organization: Studied Hamacher for topics such as datapath, cache memory and other memories, DMA, pipelining and solved exercise questions. Used Hennessy and Patterson for some pipelining concepts such as the speedup formula. Georgia Tech/Udacity COA videos to understand dependencies and hazards.
- Mathematics: Solved Rosen exercise problems for topics covered in previous phase. Covered concepts and solved lots of problems from topics such as Trees, Graph Theory, Set Theory, Functions, Relations, Combinatorics (all that balls in bins and generating function stuff). Used Prof Gravner’s notes for practicing probability (lots of good problems here). JBStatistics Youtube videos for probability distributions. Rest was a repeat from last year.
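The “balls in bins” counting mentioned under Mathematics is the stars-and-bars result. A minimal sketch (function name mine): placing n identical balls into k distinct bins, empty bins allowed, gives C(n + k − 1, k − 1) arrangements.

```python
from math import comb

def identical_balls_in_bins(n, k):
    """Stars and bars: number of ways to place n identical balls
    into k distinct bins (empty bins allowed) = C(n + k - 1, k - 1)."""
    return comb(n + k - 1, k - 1)

# Example: 10 identical balls into 3 bins -> C(12, 2) = 66 ways
```

Variants (no empty bins, distinct balls) change the formula, which is why practicing lots of these, as with Prof Gravner’s notes, pays off.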

Additionally, MSQs were introduced this year and they raise the bar a few notches. Make sure to study theory **in depth** for subjects like Operating Systems (Galvin) and Networks to prepare for MSQs. I missed out on this.

I didn’t quite get the rank I wanted, but it was enough to apply to MS programs at top IITs.

I focussed on linear algebra and probability. For this, I used Prof Strang’s LA lectures + recitations + problems, and Prof Tsitsiklis’s probability lectures + recitations + problems (up to lecture 9; ideally the whole course should be covered). If you are aiming for AI/ML (the IITs/IISc call this area Intelligent Systems), then these two subjects are mandatory.
