
Emmanouil Tzorakoleftherakis

Statistics

MATLAB Answers
RANK: 134 of 257,957
REPUTATION: 764
CONTRIBUTIONS: 0 Questions, 267 Answers
ANSWER ACCEPTANCE: 0.00%
VOTES RECEIVED: 76

File Exchange
RANK: 12,929 of 17,780
REPUTATION: 17
AVERAGE RATING: 0.00
CONTRIBUTIONS: 1 File
DOWNLOADS: 8
ALL TIME DOWNLOADS: 140

Cody
RANK: of 110,186
CONTRIBUTIONS: 0 Problems, 0 Solutions
SCORE: 0
NUMBER OF BADGES: 0

Discussions
CONTRIBUTIONS: 0 Posts

ThingSpeak
CONTRIBUTIONS: 0 Public Channels
AVERAGE RATING

Highlights
CONTRIBUTIONS: 0 Highlights
AVERAGE NO. OF LIKES
Content Feed
Training Quadrotor using PPO agent
Hello, There are multiple things not set up properly, including: 1) The IsDone flag seems to be 1 all the time, leading to epis...
24 days ago | 0
How to train RL-DQN agent with varying environment?
What you are describing is actually a pretty standard process for creating robust policies. To change the driving profiles, you can u...
11 months ago | 2
| accepted
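As a rough sketch of randomizing the environment between episodes (the model name, RL Agent block path, and profileID variable below are placeholders, not taken from the original answer):

    % Pick a different (hypothetical) driving profile at the start of every episode
    mdl = "mySimulinkModel";
    env = rlSimulinkEnv(mdl, mdl + "/RL Agent");

    % The reset function runs before each training episode; here it selects 1 of 5 profiles
    env.ResetFcn = @(in) setVariable(in, "profileID", randi(5), "Workspace", mdl);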
Editing the Q-table before Training in Basic Grid World?
Hello, Please take a look at this link that mentions how you can initialize the table.
11 months ago | 0
| accepted
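A minimal sketch of seeding the Q-table before training, assuming the predefined basic grid world (25 states by 4 actions); the exact critic constructor name may differ between releases:

    env = rlPredefinedEnv("BasicGridWorld");
    obsInfo = getObservationInfo(env);
    actInfo = getActionInfo(env);

    qTable = rlTable(obsInfo, actInfo);
    qTable.Table(:) = 0.5;            % seed every state-action value
    qTable.Table(1, :) = [1 0 0 0];   % or hand-pick values for a specific state

    critic = rlQValueFunction(qTable, obsInfo, actInfo);
    agent  = rlQAgent(critic);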
Could I learn from past data INCLUDING actions? Could I make vector with actions to be used in a certain order?
Hello, If the historical observations do not depend on the actions taken (think of stock values, or historical power demand), ...
11 months ago | 1
| accepted
update reinforcement policy.m weights
Hello, When you want to perform inference on an RL policy, there is no need to consider rewards. The trained policy already kno...
11 months ago | 0
| accepted
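For illustration, a sketch of inference-only use of a trained agent ("observation" below is a placeholder for the current measurement vector):

    generatePolicyFunction(agent);            % writes evaluatePolicy.m and agentData.mat
    action = evaluatePolicy(observation);     % standalone inference, no rewards involved

    % Equivalent inference directly on the agent object (action is returned in a cell array)
    action = getAction(agent, {observation});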
I believe the RL environment template creator has an error in the reset function but I'm not sure
Hello, You are correct, the order is wrong. That being said, the order of states depends on your dynamics and how you set up the...
11 months ago | 0
| accepted
What exactly is Episode Q0? What information is it giving?
Q0 is calculated by performing inference on the critic at the beginning of each episode. Effectively, it is a metric that tells ...
11 months ago | 1
| accepted
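Roughly speaking, for an actor-critic agent such as DDPG the value corresponds to something like the sketch below (variable names are placeholders, and this is an approximation of what the training monitor reports, not the toolbox's internal code):

    obs0   = reset(env);                    % initial observation of the episode
    actor  = getActor(agent);
    critic = getCritic(agent);

    act0 = getAction(actor, {obs0});        % action the actor would take in the initial state
    q0   = getValue(critic, {obs0}, act0);  % critic's estimate of long-term reward from there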
Resume training of a DQN agent. How to avoid Epsilon from being reset to max value?
Hello, This is currently not possible, but it is a great enhancement idea. I have informed the developers about your request an...
11 months ago | 0
| accepted
Reinforcement learning with Simulink and Simscape
Even outside the thermal domain, you most likely need to start with a simulation model. RL does not need to build that model nec...
11 months ago | 0
RL training result very different from the result of 'sim'
Please see this post that explains why simulation results may differ during training and after training. If the simulation resu...
11 months ago | 0
| accepted
RL in dynamic environment
The following example seems relevant, please take a look: https://www.mathworks.com/help/robotics/ug/avoid-obstacles-using-rein...
12 months ago | 0
MPC Controller giving nice performance during designing but fails on testing
Hello, It sounds to me that the issue is with the linearized model. When you are exporting the controller from MPC Designer, yo...
12 months ago | 0
What is in a reinforcement learning saved agent .mat file
Why don't you load the file and check? When you saved the agent in the .mat file, did you save anything else with it? Are you m...
12 months ago | 0
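A quick way to inspect the file contents:

    S = load("savedAgent.mat");   % file name is a placeholder
    disp(fieldnames(S))           % lists everything that was saved alongside the agent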
How to deal with a large number of state and action spaces?
Even if the Nx3 inputs are scalars, I would reorganize them into an "image" and use an imageInputLayer for the first layer as oppo...
12 months ago | 0
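A sketch of that idea, assuming N scalar channels stacked into an N-by-3 single-channel "image" (N and the layer sizes below are arbitrary placeholders):

    N = 50;
    obsInfo = rlNumericSpec([N 3 1]);   % observation treated as an N-by-3 image

    criticNet = [
        imageInputLayer([N 3 1], "Normalization", "none", "Name", "obs")
        convolution2dLayer([3 3], 16, "Padding", "same")
        reluLayer
        fullyConnectedLayer(64)
        reluLayer
        fullyConnectedLayer(1, "Name", "value")];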
Q learning algorithm in image processing using matlab.
Hello, Finding an example that exactly matches what you need to do may be challenging. If you are looking for the "deep learnin...
about a year ago | 0
| accepted
Need help with Model based RL
Hello, If you want to use the existing C code to train with Reinforcement Learning Toolbox, I would use the C Caller block to b...
about a year ago | 1
| accepted
How to set the reinforcement learning block in Simulink to output 9 actions
Hello, the example you are referring to does not output 3 values for the PID gains. The PID gains are "integrated" into the neu...
about a year ago | 0
Where to update actions in environment?
Reinforcement Learning Toolbox agents expect a static action space, i.e., a fixed number of options at each time step. To create a dy...
about a year ago | 0
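As an illustration of a fixed action set (a common workaround is to include every possible option, plus a "do nothing" entry, and penalize options that are invalid in a given state; the values below are placeholders):

    actInfo = rlFiniteSetSpec([0 1 2 3]);   % 0 can act as a no-op option
    actInfo.Name = "actions";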
How to check the weights and biases taken by getLearnableParameters?
Can you provide some more details? What does 'wrong answer' mean? How do you know the weights you are seeing are not correct? Ar...
about a year ago | 0
Gradient in RL DDPG Agent
If you put a break point right before 'gradient' is called in this example, you can step in and see the function implementation....
about a year ago | 0
| accepted
Soft Actor Critic deploy mean path only
Hello, Please take a look at this option here which was added in R2021a to allow exactly the behavior you mentioned. Hope this...
about a year ago | 0
| accepted
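For reference, the option mentioned above appears to be the deterministic-exploitation setting introduced in R2021a; the property name below is my assumption of what the answer links to:

    agentOpts = rlSACAgentOptions;
    agentOpts.UseDeterministicExploitation = true;   % assumed name: act with the mean path only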
How to pretrain a stochastic actor network for PPO training?
Hello, Since you already have a dataset, you will have to use Deep Learning Toolbox to get your initial policy. Take a look at ...
about a year ago | 1
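A minimal behavior-cloning sketch with Deep Learning Toolbox (obsData and actData are a placeholder supervised dataset of observations and recorded actions; the layer sizes are assumptions):

    layers = [
        featureInputLayer(4)       % observation size assumed
        fullyConnectedLayer(64)
        reluLayer
        fullyConnectedLayer(1)     % action size assumed
        regressionLayer];

    opts = trainingOptions("adam", "MaxEpochs", 50, "Verbose", false);
    net  = trainNetwork(obsData, actData, layers, opts);
    % net can then serve as the starting point for the mean path of the stochastic actor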
Failure in training of Reinforcement Learning Onramp
Hello, We are aware and working to fix this issue. In the meantime, can you take a look at the following answer? https://www....
about a year ago | 0
DQN Agent with 512 discrete actions not learning
I would initially revisit the critic architecture for two reasons: 1) The network seems a little simple for a 3->512 mapping; 2) This...
about a year ago | 0
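For point 1), a somewhat larger multi-output critic (one Q-value per discrete action) could look like the sketch below; the hidden-layer sizes are just a starting guess:

    criticNet = [
        featureInputLayer(3, "Name", "obs")
        fullyConnectedLayer(256)
        reluLayer
        fullyConnectedLayer(256)
        reluLayer
        fullyConnectedLayer(512, "Name", "qvals")];   % one output per action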
How does the Q-Learning update the qTable by using the reinforcement learning toolbox?
Can you try critic.Options.L2RegularizationFactor = 0; This parameter is nonzero by default and is likely the reason for the discre...
about a year ago | 0
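The same factor can also be zeroed before the critic is created; a sketch assuming a table-based Q-learning setup (rlRepresentationOptions shown, with qTable, obsInfo, and actInfo assumed to exist already):

    criticOpts = rlRepresentationOptions("L2RegularizationFactor", 0);
    critic = rlQValueRepresentation(qTable, obsInfo, actInfo, criticOpts);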
File size of saved reinforcement learning agents
Hello, Is this parameter set to true? If yes, then it makes sense that the .mat files are growing in size as the buffer is being pop...
about a year ago | 0
| accepted
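If the parameter in question is the agent option that stores the experience buffer inside the saved agent (property name assumed below), turning it off keeps the saved .mat files small:

    agent.AgentOptions.SaveExperienceBufferWithAgent = false;   % do not save the replay buffer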
Saving Trained RL Agent after Training
Setting the IsDone flag to 1 does not erase the trained agent - it actually makes sense that the sim was not showing anything be...
about a year ago | 0
| accepted
How to Train Multiple Reinforcement Learning Agents In Basic Grid World? (Multiple Agents)
Training multiple agents simultaneously is currently only supported in Simulink. The predefined Grid World environments in Reinf...
about a year ago | 0
| accepted
How to create a neural network for Multiple Agent with discrete and continuous action?
If you want to specify the neural network structures yourself, there is nothing specific you need to do - simply create two acto...
about a year ago | 0
| accepted