Supplementary figures: normalized TF activity with importance contour overlay. Both category means and activities of individual probes are visualized.
Ilya Kuzovkin, Juan R. Vidal, Marcela Perrone-Bertlotti, Philippe Kahane, Sylvain Rheims, Jaan Aru, Jean-Philippe Lachaux, Raul Vicente. Identifying task-relevant spectral signatures of perceptual categorization in the human cortex. https://www.biorxiv.org/content/10.1101/483487v1
Supplementary figures: averaged, per-subject and per-area time-frequency importance maps.
Ilya Kuzovkin, Juan R. Vidal, Marcela Perrone-Bertlotti, Philippe Kahane, Sylvain Rheims, Jaan Aru, Jean-Philippe Lachaux, Raul Vicente. Identifying task-relevant spectral signatures of perceptual categorization in the human cortex. https://www.biorxiv.org/content/10.1101/483487v1
Modeling is humanity's age-old way of understanding complex phenomena. The model of planetary motion, the model of gravity, and the Standard Model of particle physics are examples of the success of this approach. In neuroscience there are two ways of building models: the traditional hypothesis-driven approach, in which a model is first formulated and only then validated on data; and the newer data-driven approach, which relies on machine learning to formulate models automatically. The hypothesis-driven way yields a complete understanding of how a model works, but takes time, since every hypothesis must be formulated and validated by hand. The data-driven approach relies only on data and computational resources to search for models, but does not explain exactly how a model arrives at its results. We argue that the sheer volume of neural datasets, and the rapid growth of that volume, calls for wider adoption of the data-driven approach in neuroscience, shifting the researcher's role toward interpreting the working principles of the models. The doctoral thesis...
Reinforcement Learning algorithms typically require millions of environment interactions to learn successful policies in sparse-reward settings. Hindsight Experience Replay (HER) was introduced as a technique to increase sample efficiency by re-imagining unsuccessful trajectories as successful ones, replacing the originally intended goals. However, this method is not applicable to visual domains where the goal configuration is unknown and must be inferred from observation. In this work, we show how unsuccessful visual trajectories can be hallucinated to be successful using a generative model trained on relatively few snapshots of the goal. As far as we are aware, this is the first work that does so with the agent policy conditioned solely on its state. We then apply this model to training reinforcement learning agents in discrete and continuous settings. We show results on a navigation and pick-and-place task in a 3D environment and on a simulated robotics application. Our me...
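The goal-relabeling idea that HER builds on can be sketched briefly. This is a minimal illustration under assumptions, not the implementation from either HER or this paper: the function name `her_relabel`, the transition layout `(state, action, goal, achieved_goal)`, the sparse `reward_fn`, and the "future" goal-sampling strategy are all choices made for the sketch.

```python
import random

def her_relabel(trajectory, reward_fn, k=4):
    """Relabel each transition with up to k goals that were actually
    achieved later in the same trajectory, so a 'failed' episode
    becomes a successful demonstration for those hindsight goals."""
    relabeled = []
    for t, (state, action, goal, achieved) in enumerate(trajectory):
        # keep the original transition with its intended goal
        relabeled.append((state, action, goal, reward_fn(achieved, goal)))
        # sample substitute goals from states achieved at time t or later
        future_achieved = [step[3] for step in trajectory[t:]]
        for _ in range(min(k, len(future_achieved))):
            new_goal = random.choice(future_achieved)
            relabeled.append((state, action, new_goal,
                              reward_fn(achieved, new_goal)))
    return relabeled
```

With a sparse reward such as `reward_fn = lambda a, g: 1.0 if a == g else 0.0`, the relabeled copies inject non-zero rewards into the replay buffer even when the original goal was never reached, which is the source of HER's sample-efficiency gain.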
Papers by Ilya Kuzovkin