This podcast is sponsored by our partner GSI Technology, and our guest this week is their Lead AI Scientist Daphna Idelson. Daphna has a Computer Engineering degree from the Technion Israel Institute of Technology and extensive industry experience in specialised video processing, deep learning algorithms, CNNs, distance metric learning, and large-scale similarity search.

At GSI Technology, Daphna applies her expertise to ground-breaking power/performance solutions built on the Gemini Associative Processing Unit (APU).

Topics explored include:

  • Daphna's Current Research Focus
  • Recent Findings in Accelerating Similarity Search
  • An Overview of the Radar Spectrogram Classification Challenge
  • Predictions for Advancements in CV and Image Processing
  • Advice for Those Starting Out in the Field of AI
  • The Challenges Faced Being a Woman Working in AI
  • The Positive Impact of Having Role Models in Your Career

🎧 Listen to the podcast here.

Nikita RE•WORK [1:00]

Hi, Daphna. Welcome to the RE•WORK Women in AI Podcast. You're currently a Lead AI Scientist at GSI Technology. Can you just kick off by explaining a bit more about your current research focus?

Daphna [1:12]

Yes, sure. First of all, thank you for having me. So generally, I research machine and deep learning algorithms as part of providing end-to-end solution projects built around the APU, which is GSI's associative processing unit. More generally, I investigate how trends and use cases in the world of AI can most benefit from the APU. Lately, my interest has focused on topics such as face recognition, visual content search, and similarity search: applications that can be defined as search applications, or applications where the predictions or decisions are based on distance measurements between items. As part of that, I focused my research on zero- and few-shot learning problems using distance metric learning methods. The idea is to train neural networks to create representation vectors from raw data, such as images, that are optimal for content-based retrieval tasks such as visual search. More recently, I've also been working on algorithms for approximate similarity search, and I developed a method for converting feature representation vectors into binary codes for searching more efficiently in large databases.
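To make the idea of distance-based retrieval concrete, here is a toy sketch (the gallery names and embedding values are invented for illustration; this is not GSI's model): a trained network would map each image to a representation vector, and retrieval then reduces to finding the gallery vector with the smallest distance to the query's vector.

```python
import math

def cosine_distance(a, b):
    # 1 - cosine similarity: small when the vectors point the same way.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / (norm_a * norm_b)

# Hypothetical embeddings a trained network might produce for three images.
gallery = {
    "shoe_a": [0.9, 0.1, 0.1],
    "shoe_b": [0.8, 0.2, 0.1],
    "hat":    [0.1, 0.9, 0.2],
}
query = [0.85, 0.15, 0.1]  # embedding of the query image

# Content-based retrieval: return the item whose vector is closest to the query.
best = min(gallery, key=lambda name: cosine_distance(query, gallery[name]))
```

With a metric-learning objective such as a triplet or contrastive loss, training pushes same-class embeddings together and different-class embeddings apart, so this nearest-vector lookup returns semantically similar items.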

Nikita RE•WORK [2:42]

And I wanted to just dig a bit deeper into that topic. So I saw that you recently presented a talk on accelerating Similarity Search. Can you just give us a bit more of an overview of that presentation and also your research findings there?

Daphna [2:57]

Yes, of course. So just last month I presented a short talk at BayLearn, which is a Bay Area machine learning symposium. The presentation was about a method I developed for approximate nearest neighbour search using binary hashing, and I'll try to explain this. In the field of content retrieval by similarity, for instance, you may want to find the shoes that look most like an image you have within a large database, such as eBay or Amazon, or you may want to recognise a face from a large database of faces. These tasks are usually handled by using models that convert the raw data, like images, into vectors, sets of numbers that represent the main features of the item, and then using distance measurements between vectors to retrieve the most similar items within the database. This method is commonly known as k-nearest-neighbours search. Today, since databases are becoming very large, containing millions, even billions of items, there is a need to approximate this search: to reduce the memory footprint required to represent the database, and to reduce the time it takes to compare between billions of items. One method of approximating is converting those feature vectors into binary codes, allowing a reduction in memory usage and requiring only simple bit operations to make a comparison. Naturally, though, this can also reduce search accuracy by quite a bit, so managing to convert to binary codes whilst still maintaining local distances between different items, that's the challenge. In this work, I presented a neural-network-based method for this type of conversion, training a neural model to maintain the relative distances of items with a sampling method. This method improves the accuracy of the approximation by quite a bit compared to other methods we tested, on several benchmark data sets and on real-world databases in some of GSI's projects. So this allowed us to stack many more items on one APU card.
The paper will be uploaded soon to arXiv, and hopefully it will also be available as an extended abstract, or maybe a recorded presentation, on the BayLearn site, for anyone interested in more details.
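Daphna's conversion is learned by a neural network; as a simpler classical stand-in, random-hyperplane hashing also turns float feature vectors into binary codes whose Hamming distances roughly track the original angles. This toy sketch (invented 4-dimensional vectors, 16-bit codes) illustrates the binary-hashing workflow she describes, not her specific method:

```python
import random

def random_hyperplanes(dim, n_bits, seed=0):
    # Each hyperplane is a random Gaussian vector; a sign test against it yields one bit.
    rng = random.Random(seed)
    return [[rng.gauss(0.0, 1.0) for _ in range(dim)] for _ in range(n_bits)]

def to_binary(vec, planes):
    # Bit = which side of each hyperplane the vector falls on.
    return [1 if sum(v * p for v, p in zip(vec, plane)) >= 0 else 0
            for plane in planes]

def hamming(a, b):
    # Comparing codes needs only cheap bit operations.
    return sum(x != y for x, y in zip(a, b))

# Toy database of 4-dimensional feature vectors; items 0 and 2 are near-duplicates.
db = [[0.9, 0.1, 0.0, 0.2],
      [0.1, 0.8, 0.3, 0.0],
      [0.85, 0.15, 0.05, 0.25]]
planes = random_hyperplanes(dim=4, n_bits=16)
codes = [to_binary(v, planes) for v in db]

query = [0.88, 0.12, 0.02, 0.22]  # close to items 0 and 2, far from item 1
qcode = to_binary(query, planes)
nearest = min(range(len(db)), key=lambda i: hamming(qcode, codes[i]))
```

The accuracy loss Daphna mentions shows up here too: nearby vectors can land on opposite sides of a hyperplane, which is exactly what her learned conversion is optimised to avoid.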

Nikita RE•WORK [5:33]

Definitely, thanks for that overview. And once it's published, we'll definitely share it with our network, and we can add it to the podcast notes as well for our listeners. And so what are the next steps for that research?

Daphna [5:46]

So currently it's already in use, integrated into some of GSI's projects, but there are several directions that I still wish to investigate further to enhance its capabilities. One direction is, rather than converting the features into binary codes, to reformulate the method as a converter to quantized feature vectors, meaning two or more bits per feature. This will complicate the search computation and increase search time to some extent, but it will also allow more distance-scoring possibilities, and by that, more accurate results. This will require modifications to the optimization scheme. Another direction is adapting the method to a hierarchical search scheme. Meaning that rather than performing an exhaustive search on the entire database, we could use a variation of this method to find good representatives of the database and search first through them, a much smaller set; then, after finding the closest representatives to the query item, search only the database items assigned to them, and by that significantly reduce the number of comparisons.
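The hierarchical scheme, compare the query against a small set of representatives first, then search exhaustively only within the chosen representative's assigned items, can be sketched like this (the representatives and buckets below are toy stand-ins for, say, k-means centroids and their clusters; this illustrates the general idea, not GSI's implementation):

```python
import math

# Toy database grouped under two "representatives" (think k-means centroids).
reps = {0: [0.0, 0.0], 1: [10.0, 10.0]}
buckets = {
    0: [[0.1, 0.2], [0.3, -0.1]],
    1: [[9.8, 10.1], [10.2, 9.9]],
}

def hierarchical_search(query):
    # Stage 1: exhaustive search, but only over the (few) representatives.
    best_rep = min(reps, key=lambda r: math.dist(query, reps[r]))
    # Stage 2: exhaustive search only within that representative's bucket,
    # skipping every item assigned elsewhere.
    return min(buckets[best_rep], key=lambda item: math.dist(query, item))

result = hierarchical_search([9.9, 10.0])
```

With k representatives over n items, stage 1 costs k comparisons and stage 2 roughly n/k, instead of n for a flat exhaustive search (`math.dist` requires Python 3.8+).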

Nikita RE•WORK [7:15]

And something else that I know you've been working on recently is that you've taken part in the Radar Spectrogram Classification Challenge, hosted by the Israeli Ministry of Defence R&D Directorate. Are you able to share an overview of that with some of our listeners who might not be as familiar with it, and also, what was the outcome?

Daphna [7:37]

Yes, of course, this was actually a really fascinating experience. The R&D Directorate of the Israeli Ministry of Defence launched an open competition for target classification in Doppler-pulse radar signals, more specifically, being able to distinguish between humans and animals in radar signals. The provided data included real-world radar tracks of animal and human targets, detected by several sensors at different locations, and the goal was to succeed in generalising to correct predictions on new sensors and new locations. Now, what such radar signals actually record is movement, and the difference between targets lies within the micro-Doppler changes, meaning small changes caused mainly by movement of the limbs, the arms, legs, tail, and such, but also by background clutter, such as leaves, grass, and weather conditions, making it an exceptionally challenging task. It was even more challenging due to the very small and unbalanced data set for training and validation. Anyway, such radar signals are commonly transformed into spectrograms. Spectrograms are visual representations of the frequencies of a signal over time. So while it may be an image that is unrecognisable to an untrained eye, it can apparently still be solved with classic CNN models, or image classification models. With quite a few tweaks and tricks, especially in the processing of the data, such as augmentation and sampling tuned specifically for noisy frequencies and unbalanced data, we achieved pretty good results. There's an elaborate description on GSI's Medium blog. But anyway, I entered this competition rather late and with no background in radar signals, so I didn't come with very high expectations. Still, with a lot of willpower, working around the clock with the help of colleagues, we actually won the challenge, taking first place out of thousands of contestants. So that was really cool.
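To illustrate the spectrogram step she mentions: a sampled signal is cut into overlapping windows, each window is Fourier-transformed, and the per-window magnitudes form a time-by-frequency "image" that a CNN can then classify. A minimal short-time Fourier transform in plain Python (toy window sizes, no radar-specific processing):

```python
import cmath
import math

def stft_magnitudes(signal, win=8, hop=4):
    """Return a list of frames; each frame holds |DFT| magnitudes per frequency bin."""
    frames = []
    for start in range(0, len(signal) - win + 1, hop):
        frame = signal[start:start + win]
        mags = []
        for k in range(win // 2 + 1):  # keep only non-negative frequencies
            coeff = sum(frame[n] * cmath.exp(-2j * cmath.pi * k * n / win)
                        for n in range(win))
            mags.append(abs(coeff))
        frames.append(mags)
    return frames  # the "spectrogram": rows = time, columns = frequency

# A pure tone completing one cycle every 4 samples concentrates its energy at
# bin k = 2 of an 8-point window (frequency 1/4 = 2/8 cycles per sample).
tone = [math.sin(2 * math.pi * n / 4) for n in range(32)]
spec = stft_magnitudes(tone)
peak_bin = max(range(len(spec[0])), key=lambda k: spec[0][k])
```

In the competition setting, each radar track would be a much longer signal, and the resulting spectrogram image is what the CNN classifier consumes.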

Nikita RE•WORK [10:09]

That's fantastic. That's a great achievement. And I guess, kind of stepping back a bit and looking a bit more at the wider picture. So looking a bit further ahead for the next year or so, what would you say are your predictions for advancements in computer vision and also for image processing?

Daphna [10:32]

I think it's such an unpredictable world that it's hard to say. But yes, we do orient ourselves to the near future, and I'll try to gamble a bit. I'd say that, first of all, content-based search and recommendation systems will further evolve using multimodal networks, for example, improved image understanding combined with high-end NLP, natural language processing, functionality. So you will be able to search for images using complex textual descriptions; for instance, be able to search through your photo album or Google for a specific image by a detailed description such as "Tom laughing at a joke while Sam is talking around the dinner table at David's wedding". This is one thing. Also, I believe face recognition and personal identification applications will further improve and be used more commonly for surveillance and commercial applications. That being said, though, I also think that the ethical aspect of AI will grow stronger, picking up on topics of privacy and regulation, and also on bias and fake news, something we already began to see in the past year with the discussion on the racial and gender bias being incorporated into machines. So we'll see how to combat that within the models and within the data itself, because the advancements in this field are really amazing. But as they say, with great power comes great responsibility. I do think we need to tread carefully, be aware, and try to avoid negative outcomes.

Nikita RE•WORK [12:16]

Yes, definitely some great points there. And quite a few of those have been reflected in other conversations that we've had with previous podcast guests. So it'll be really interesting to see what does happen in the next few months; it's very hard to predict anything at the moment after experiencing this past year. But yes, we'll be fascinated to see how AI research is impacted, if at all, by the current pandemic that we're in. And something that a lot of our listeners will be very interested to hear more about is what prompted you to begin your career in AI, as quite a few of our listeners often get in touch to ask if we can share more details on how our AI experts got involved, and a lot of them are at the beginning of their careers. So it would be great if you could just share a bit more about that with us.

Daphna [13:09]

Yes. I'd say that the defining moment of my career actually begins with what made me decide to study computer science, more specifically, computer engineering. In high school, I did not major in science at all. I had a knack for maths, sure, but we did not have any computer science programme, and I actually majored in art, which I love to this day. The first time it crossed my mind, and what actually settled it, was when I saw my dad install a graphics card in his computer. I was so fascinated by all the bits and bobs and connecting lines; it looked just like a perfectly organised chaos. That's the moment I decided I really must learn how on earth this small piece of clutter can form such an intelligent machine. And then, during my studies at university, I encountered the fields of image processing, computer graphics, and machine learning, and later on computer vision, which really fascinated me, mainly thanks to the creativity. I love art and I love creative thinking, so this visual and, I could say, even artistic way of thinking in computer vision and image processing, that's what attracted me so many years ago. Creating an intelligent computer programme that can see, learn, and understand, seeing it from the inside, it's amazing. You never know what to expect when you start a project, what insights you will gain, and what capabilities your algorithm will reach, and I love that. I did not expect that years later, with the rise of deep learning and the fusion of all the different data science fields, AI would become all the rage. We are making magic.

Nikita RE•WORK [15:09]

It's so interesting to hear everyone's individual paths into their current roles. Quite often on this podcast series, and also with some of the speakers we've previously had at RE•WORK events, that creative background has come through, often through art or even music. So yes, it's really interesting to see how that develops into working in this current field. And in terms of advice to some of our listeners, is there any specific advice that you would give to somebody who is looking to start or move their career more into AI?

Daphna [15:47]

Yes. I think what we're experiencing today, at least in Israel, a high-tech nation, is a flood of demand for both jobs and employees, but ironically, still a difficulty in closing the large gap between them. There is very high demand for good people in AI, and a lot of people want to work in the field because it's considered prestigious, it's rapidly progressing, and it has a lot of appeal to it. And what's been happening is that because there's such high demand, people want to get there quickly, with as many shortcuts as possible, for instance, using one of those many available crash courses or seminars oriented specifically towards deep learning, which are great. But this is problematic, because I think AI is a field that you really have to mature into through experience. You have got to have maths, statistics, linear algebra, and knowledge and research experience in either computer vision, NLP, or some other form of machine learning and data science. So these shortcuts, while they do enrich the field with new blood and new people coming in, can come at the expense of that foundation. At the end of the day, I would recommend anyone who's really interested in pursuing a career in AI to actually take the time to study and gain experience in research before aiming for a position as an AI specialist. On the other hand, we often find that people who have been studying for a great deal of time, dedicating it mostly to research and pursuing PhDs and postdocs, which is amazing, might sometimes lack the practical dimensions of the job: turning theory into a working product, having good programming skills, knowing how to handle large, real-world data. So unless your goal is to be purely research-oriented, my recommendation would be to find a balance between specialising within the field and staying career-oriented, and to avoid neglecting good practical skills.
By the way, to make things clear, some deep learning courses are really great, and they're a great way to complete your deep learning training; I did so myself. But I think they should be considered the final stage of a long journey. Final tip: always try to keep yourself updated with the AI community, forums, meetups, conferences, etc.

Nikita RE•WORK [18:33]

That is fantastic advice. And I think that balance between having the theoretical background but also enough practical experience is, I mean, central for probably most jobs, I would say, and it's certainly true for this sector. So yeah, thank you for sharing that. And as you know, this is a podcast based around fantastic women working in AI. As a woman in the field yourself, have you faced any challenges specifically from being a woman working in tech and within the AI sector?

Daphna [19:12]

So those who know me might say that I can be defined as a raging feminist. As such, I am very aware of gender disparities, and I see them even in their most subtle expressions. Of course, sometimes the expressions are quite blunt; I cannot tell you how many times I've been mistaken for the secretary. But most often what you'll encounter are the subtle expressions of these disparities, like being taken by a few people just a little less seriously, or simply being one of the very few women at a conference, listening to talks given almost exclusively by men, reading papers written by men, looking up to the masterminds of the field, mostly men. So you cannot help but feel, at times, like an outsider. And this scarcity of women in the field is, I suppose, partly due to the way women are socialised less towards curiosity in science and technology, so much so that it can be very difficult to overcome even your own prejudice. Throughout your life, you definitely get the feeling that this is a man's world. And this is general across STEM fields, but I think the issue currently seems even more severe in AI. At BayLearn, I heard a fascinating talk by Timnit Gebru, I hope I'm pronouncing her name correctly, who is an advocate for diversity in AI. She talked about the hierarchy of knowledge, and one specific related topic she mentioned that I found enlightening is how, the moment a field proves itself to be powerful, it becomes dominated by those in positions of power. I think this is what we've seen in computer science in general, which surprisingly has actually seen a decline in women's participation over the past few decades, and which I think is today particularly reflected in AI. So, to conclude, I believe women still often struggle with insecurities, a stronger tendency towards imposter syndrome, a stronger need to prove themselves.
That will take time to overcome, but on a more positive note, I think both women and men are becoming much more aware of these challenges and are working towards changing this trend, like with this podcast. So I am optimistic about the future.

Nikita RE•WORK [21:58]

Well, definitely, without a doubt there's been a change, a shift towards more awareness regarding women within the field. That's something that we've seen as we've held AI conferences and dinners and things like that over the past few years; we've certainly seen much more awareness of the value of having really inspiring women involved in what we do, and more women, and their male counterparts, nominating fantastic women to share their research and advancements at our events as well, which is great to see. And you mentioned imposter syndrome there; that's something quite a few of our guests in previous podcast editions have mentioned as well. And a lot of that then links to how important role models have been for them. Has that been a factor in your career at all?

Daphna [22:51]

Yes. I think probably the biggest role models are the colleagues I came across and had the privilege to work with throughout my career. For instance, I have a close colleague and friend we've been working together with for many years, who actually interviewed me and took me in as a student 15 years ago, and from whom I've learned so much all those years, and still do. His name is Sammy, and he will probably be very embarrassed by this. There's also a very professional and supportive She Learn community in Israel with some leading names, and I'm awed by their knowledge and their devotion to sharing it. And I can also admit that I'm starstruck by Andrew Ng, whose online courses I took, Yann LeCun and Fei-Fei Li, and I think I will now add Timnit Gebru to this list.

Nikita RE•WORK [23:54]

Yes, some fantastic people to look up to. And a lot of those that you mentioned have plenty of resources available online, whether it's courses or talks or things like that, so yes, we'd definitely recommend having a look for those as well. Well, thank you so much, Daphna. It's been really interesting to hear a bit more about not just your current research at GSI and some of your recent achievements, but also a bit more about your background and your route into where you are now. And I think it'll be quite inspiring for a lot of our listeners out there who are perhaps thinking about how they can get into the field and how they should go about it. So thank you so much; it's been great having you as a guest on our podcast today.

Daphna [24:38]

Thank you.

Nikita RE•WORK [24:39]

A huge thank you again to GSI Technology for sponsoring this week's podcast and supporting women in AI, and also to Daphna for taking the time to chat with us. If you're keen to learn more about GSI Technology, then please do visit their website via the links below. We've worked with GSI Technology since 2016, and it's been fantastic to partner with them over the years. We've also recently collaborated with them on our white paper on Leveraging AI for Pandemics, which you can download for free on the white paper tab of the AI Library. Until next time, take care.

Founded in 1995, GSI Technology, Inc. is a leading provider of semiconductor memory solutions. GSI's resources are focused on new products that leverage the strengths of its market-leading SRAM business. The Company recently launched radiation-hardened memory products for extreme environments and the Gemini APU, a memory-centric associative processing unit designed to deliver performance advantages for diverse AI applications. The APU's architecture features massive parallel data processing with two million-bit processors per chip. The massive in-memory processing reduces computation time from minutes to milliseconds, even nanoseconds, while significantly reducing power consumption with a scalable format. Headquartered in Sunnyvale, California, GSI Technology has 172 employees, 114 engineers, and 92 granted patents. Learn more about GSI Technology and their advancements via
