Caio Davi
Aug 26, 2022

Thank you!

It runs on GPUs, to the extent the TF framework supports it. But it has a few limitations: it can't distribute the training over multiple GPUs, for example.

The distribution strategies in TF are very "gradient-descent based". This is something I want to work on next. Although the PSO algorithm requires a high volume of communication between particles, which is a major handicap for a distributed architecture, I think we can find a suitable approach to minimize it.
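To make that bottleneck concrete, here is a minimal sketch of one PSO iteration in plain TensorFlow (an illustration on a toy objective, not the library's actual implementation): the global-best reduction reads every particle's state, so a multi-GPU split would have to synchronize it on every step.

```python
import tensorflow as tf

n_particles, dim = 32, 10
w, c1, c2 = 0.7, 1.5, 1.5  # inertia and acceleration coefficients (illustrative values)

x = tf.Variable(tf.random.uniform((n_particles, dim), -1.0, 1.0))  # positions
v = tf.Variable(tf.zeros((n_particles, dim)))                      # velocities

def loss(pos):
    return tf.reduce_sum(pos ** 2, axis=1)  # toy objective: sphere function

pbest = tf.Variable(x)            # each particle's best position so far
pbest_val = tf.Variable(loss(x))  # and its best loss

for _ in range(100):
    r1 = tf.random.uniform(x.shape)
    r2 = tf.random.uniform(x.shape)
    # gbest is a reduction over ALL particles -- this is the
    # cross-particle communication that makes distribution hard.
    gbest = tf.gather(pbest, tf.argmin(pbest_val))
    v.assign(w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x))
    x.assign_add(v)
    cur = loss(x)
    improved = cur < pbest_val
    pbest.assign(tf.where(improved[:, None], x, pbest))
    pbest_val.assign(tf.where(improved, cur, pbest_val))
```

Everything else in the loop is embarrassingly parallel per particle; only the `gbest` gather couples them, so that is the natural place to look for a relaxed or less frequent synchronization scheme.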
