
Yihao Wan


Last seen: almost 2 years ago | Active since 2021

Followers: 0   Following: 0

Statistics

MATLAB Answers

10 Questions
1 Answer

RANK
35,082
of 300,851

REPUTATION
1

CONTRIBUTIONS
10 Questions
1 Answer

ANSWER ACCEPTANCE
90.0%

VOTES RECEIVED
1

RANK
of 21,094

REPUTATION
N/A

AVERAGE RATING
0.00

CONTRIBUTIONS
0 Files

DOWNLOADS
0

ALL-TIME DOWNLOADS
0

RANK
of 171,294

CONTRIBUTIONS
0 Problems
0 Solutions

SCORE
0

NUMBER OF BADGES
0

CONTRIBUTIONS
0 Posts

CONTRIBUTIONS
0 Public Channels

AVERAGE RATING

CONTRIBUTIONS
0 Highlights

AVERAGE NUMBER OF LIKES

  • Thankful Level 3


Feeds


Question


Scaling layer usage for action output
Hello, I am using the tanhLayer as the output activation function for the action network, while my action space is [0,10]. In thi...

more than 2 years ago | 0 answers | 0

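A minimal sketch of the usual pattern for this, assuming Reinforcement Learning Toolbox's scalingLayer and a one-dimensional action; the layer names and sizes are illustrative, not the asker's actual network:

% tanh bounds the actor output to [-1,1]; the scaling layer then maps
% that interval onto the [0,10] action range via 5*x + 5.
actionPath = [
    fullyConnectedLayer(1,'Name','fc_action')
    tanhLayer('Name','tanh')
    scalingLayer('Name','action_scale','Scale',5,'Bias',5)
    ];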

Question


How to set multiple stopping or saving criteria for RL agent?
Hello, I wondered if it is possible to set multiple stopping or saving criteria for an RL agent? E.g., save the agent for average ep...

more than 2 years ago | 2 answers | 0

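For reference, a hedged sketch with rlTrainingOptions, which accepts one stop criterion and one save criterion at a time; the threshold values, agent, and env below are placeholders:

opts = rlTrainingOptions( ...
    'MaxEpisodes',5000, ...
    'StopTrainingCriteria','AverageReward', ...   % stop once the average reward
    'StopTrainingValue',480, ...                  % reaches this placeholder value
    'SaveAgentCriteria','EpisodeReward', ...      % save whenever an episode reward
    'SaveAgentValue',500, ...                     % exceeds this placeholder value
    'SaveAgentDirectory','savedAgents');
trainingStats = train(agent,env,opts);            % agent and env assumed to exist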

Question


How to run the simulink model when implementing custom RL training?
Hello, I am developing custom training of an RL DQN agent based on the link; however, how should I adapt it to the Simulink envir...

more than 2 years ago | 1 answer | 0

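A rough sketch of one way to wrap a Simulink model for a custom loop, assuming the model contains an RL Agent block; the model name, block path, obsInfo/actInfo, and step limit are placeholders:

mdl = 'myModel';                                   % placeholder model name
env = rlSimulinkEnv(mdl,[mdl '/RL Agent'],obsInfo,actInfo);
simOpts = rlSimulationOptions('MaxSteps',500);
% Each iteration of the custom training loop can run one episode against
% the Simulink environment and read back the logged experiences:
out = sim(env,agent,simOpts);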

Question


How to implement the custom training with DQN agent in Simulink environment?
Hello, I would like to implement custom RL DQN agent training in a Simulink environment. I have tried to look into the referen...

more than 2 years ago | 1 answer | 0


Question


Transient value problem of the variable in reward function of reinforcement learning
Hello, I encountered a problem when designing the reward function. In the Simulink environment, I want to incorporate some variabl...

almost 5 years ago | 1 answer | 1


Question


Elements problem due to the deep learning toolbox 'Predict'
I use the Deep Learning Toolbox Predict block for the RL agent in Simulink, while the error indicates Invalid setting for input port...

almost 5 years ago | 1 answer | 0


Question


How to extract neural network of reinforcement learning agent?
Is there a way to get the neural network of a trained reinforcement learning agent?

almost 5 years ago | 1 answer | 0

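For what it's worth, a minimal sketch assuming a release where getCritic/getActor and getModel are available; agent stands for the trained agent:

critic = getCritic(agent);    % value-based agents such as DQN expose a critic
net = getModel(critic);       % underlying network object
% Actor-critic agents expose the policy network through the actor instead:
% actor = getActor(agent);  net = getModel(actor);
disp(net.Layers)              % inspect the extracted layers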

Question


C code generation for reinforcement learning agent in Simulink 2019b
Hello, I want to implement the reinforcement learning agent in dSPACE, and the currently supported version is Simulink 2019b. I wonder...

almost 5 years ago | 1 answer | 0


Answered
code generation error: Build error: C++ compiler produced errors. See the Build Log for further details.
Thanks. It was solved by copying mkldnn_config.h to the directory.

almost 5 years ago | 0

| accepted

Question


code generation error: Build error: C++ compiler produced errors. See the Build Log for further details.
I am using C++ code generation for a deep neural network, and it popped up the following error: cl /TP -c -nologo -GS -W4 -DW...

almost 5 years ago | 1 answer | 0


Question


How to implement reinforcement learning using code generation
I want to implement the reinforcement learning block in dSPACE using code generation, but Simulink pops up the error ...

almost 5 years ago | 1 answer | 0

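One commonly suggested route, sketched under the assumption that generatePolicyFunction is available in the release being used; the observation size and coder settings are placeholders:

% Generate a standalone policy evaluation function from the trained agent;
% by default this writes evaluatePolicy.m and agentData.mat.
generatePolicyFunction(agent);
% The generated function can then be targeted by MATLAB Coder, for example
% from a MATLAB Function block in the model or directly from the command line:
cfg = coder.config('lib');
codegen -config cfg evaluatePolicy -args {zeros(4,1)}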