Study Log (2021.03)
2021-03-23
- S-K RL
- train_FT10_ppo_node_only.py
- do_simulate_on_aggregated_state()
- value_loss, action_loss, dist_entropy = agent.fit(eval=0, reward_setting='utilization', device=device, return_scaled=False)
- eval_performance = evaluate_agent_on_aggregated_state(simulator=sim, agent=agent, device='cpu', mode='node_mode')
- val_performance = validation(agent, path, mode='node_mode')
- Developed handling for cases where multiple machines can proceed in a single step
- train_FT10_ppo_node_only.py
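The four calls logged above make up the per-episode loop in train_FT10_ppo_node_only.py. As a minimal sketch (the agent, simulator, and helper bodies below are hypothetical stubs, not the repository's actual implementations), the loop looks like:

```python
import random

class StubAgent:
    """Hypothetical stand-in for the PPO agent; the real fit() runs a PPO update."""
    def fit(self, eval=0, reward_setting='utilization', device='cpu', return_scaled=False):
        # The real method returns (value_loss, action_loss, dist_entropy);
        # faked here with random numbers so the sketch is runnable.
        return random.random(), random.random(), random.random()

def do_simulate_on_aggregated_state(simulator, agent):
    """Roll out one scheduling episode, collecting transitions for the agent (stub)."""

def evaluate_agent_on_aggregated_state(simulator, agent, device='cpu', mode='node_mode'):
    """Greedy rollout of the current policy on the training instance (stub)."""
    return 0.0

def validation(agent, path, mode='node_mode'):
    """Average policy performance over held-out instances under `path` (stub)."""
    return 0.0

def train(sim, agent, path, num_episodes, device='cpu'):
    history = []
    for _ in range(num_episodes):
        # 1) collect an episode, 2) PPO update, 3) evaluate, 4) validate
        do_simulate_on_aggregated_state(sim, agent)
        value_loss, action_loss, dist_entropy = agent.fit(
            eval=0, reward_setting='utilization', device=device, return_scaled=False)
        eval_performance = evaluate_agent_on_aggregated_state(
            simulator=sim, agent=agent, device='cpu', mode='node_mode')
        val_performance = validation(agent, path, mode='node_mode')
        history.append((value_loss, action_loss, dist_entropy,
                        eval_performance, val_performance))
    return history

history = train(sim=None, agent=StubAgent(), path='validation_instances/', num_episodes=3)
```

The `path` value and stub return values are placeholders; only the call signatures mirror the log.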
2021-03-22
- S-K RL
- train_FT10_ppo_node_only.py
- do_simulate_on_aggregated_state()
- value_loss, action_loss, dist_entropy = agent.fit(eval=0, reward_setting='utilization', device=device, return_scaled=False)
- eval_performance = evaluate_agent_on_aggregated_state(simulator=sim, agent=agent, device='cpu', mode='node_mode')
- val_performance = validation(agent, path, mode='node_mode')
- Developed handling for cases where multiple machines can proceed in a single step
- train_FT10_ppo_node_only.py
2021-03-21
- S-K RL
- train_FT10_ppo_node_only.py
- do_simulate_on_aggregated_state()
- value_loss, action_loss, dist_entropy = agent.fit(eval=0, reward_setting='utilization', device=device, return_scaled=False)
- eval_performance = evaluate_agent_on_aggregated_state(simulator=sim, agent=agent, device='cpu', mode='node_mode')
- val_performance = validation(agent, path, mode='node_mode')
- Developed handling for cases where multiple machines can proceed in a single step
- train_FT10_ppo_node_only.py
2021-03-20
- S-K RL
- train_FT10_ppo_node_only.py
- do_simulate_on_aggregated_state()
- value_loss, action_loss, dist_entropy = agent.fit(eval=0, reward_setting='utilization', device=device, return_scaled=False)
- eval_performance = evaluate_agent_on_aggregated_state(simulator=sim, agent=agent, device='cpu', mode='node_mode')
- val_performance = validation(agent, path, mode='node_mode')
- Added a Plotly Gantt chart
- train_FT10_ppo_node_only.py
2021-03-18
- S-K RL
- train_FT10_ppo_node_only.py
- do_simulate_on_aggregated_state()
- value_loss, action_loss, dist_entropy = agent.fit(eval=0, reward_setting='utilization', device=device, return_scaled=False)
- eval_performance = evaluate_agent_on_aggregated_state(simulator=sim, agent=agent, device='cpu', mode='node_mode')
- val_performance = validation(agent, path, mode='node_mode')
- Developed handling for cases where multiple machines can proceed in a single step
- train_FT10_ppo_node_only.py
2021-03-17
- S-K RL
- train_FT10_ppo_node_only.py
- do_simulate_on_aggregated_state()
- value_loss, action_loss, dist_entropy = agent.fit(eval=0, reward_setting='utilization', device=device, return_scaled=False)
- eval_performance = evaluate_agent_on_aggregated_state(simulator=sim, agent=agent, device='cpu', mode='node_mode')
- val_performance = validation(agent, path, mode='node_mode')
- SBJSSP_report_results.ipynb
- def get_swapping_ops(blocking_op, machine_dict)
- class blMachine(Machine)
- class blMachineManager(MachineManager)
- class blSimulator(Simulator)
- def evaluate_agent_on_aggregated_state(simulator, agent, device, mode='edge_mode')
- def evaluate_agent_on_aggregated_state_DR(simulator, mode='MTWR')
- def SBJSSP_validation(agent, path, device='cpu', optimums=None, num_val=100, new_attr=False, mode='edge_mode', special=None, DR=None)
- def compare_with_optimum(makespans, files, plot=False, scheduler_name=None)
- def evaluate_agent_on_aggregated_state_DR_interrupted(simulator, mode='MTWR', shutdown_prob=0.2)
- def evaluate_agent_on_aggregated_state_interrupted(simulator, agent, device, mode='edge_mode', shutdown_prob=0.2)
- def SBJSSP_validation_interrupted(agent, path, device='cpu', optimums=None, num_val=100, new_attr=False, mode='edge_mode', special=None, DR=None, shutdown_prob=0.2)
- def random_simulator(min_m=5, max_m=10, max_job=10, new_attr=False, special='SBJSSP')
- def do_simulate_on_aggregated_state_interrupted(simulator, agent, episode_index, device, reward='utilization', scaled=False, mode='edge_mode', shutdown_prob=0.2)
- train_FT10_ppo_node_only.py
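The `*_interrupted` variants listed above inject random machine shutdowns during evaluation, controlled by `shutdown_prob`. A minimal sketch of such an interruption mechanism (the `Machine` class and its fields here are hypothetical illustrations, not the repository's `blMachine` API):

```python
import random

class Machine:
    """Hypothetical minimal machine; the real blMachine wraps the repo's Machine class."""
    def __init__(self, machine_id):
        self.machine_id = machine_id
        self.available = True

def maybe_shutdown(machines, shutdown_prob=0.2, rng=random):
    """At each simulation step, each available machine fails independently
    with probability shutdown_prob; returns the ids of newly failed machines."""
    interrupted = []
    for m in machines:
        if m.available and rng.random() < shutdown_prob:
            m.available = False
            interrupted.append(m.machine_id)
    return interrupted

random.seed(0)
machines = [Machine(i) for i in range(10)]
down = maybe_shutdown(machines, shutdown_prob=0.5)
```

Operations queued on a failed machine would then need rescheduling, which is presumably where `get_swapping_ops(blocking_op, machine_dict)` comes in.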
2021-03-16
- S-K RL
- train_FT10_ppo_node_only.py
- do_simulate_on_aggregated_state()
- value_loss, action_loss, dist_entropy = agent.fit(eval=0, reward_setting='utilization', device=device, return_scaled=False)
- eval_performance = evaluate_agent_on_aggregated_state(simulator=sim, agent=agent, device='cpu', mode='node_mode')
- val_performance = validation(agent, path, mode='node_mode')
- SBJSSP_report_results.ipynb
- def get_swapping_ops(blocking_op, machine_dict)
- class blMachine(Machine)
- class blMachineManager(MachineManager)
- class blSimulator(Simulator)
- def evaluate_agent_on_aggregated_state(simulator, agent, device, mode='edge_mode')
- def evaluate_agent_on_aggregated_state_DR(simulator, mode='MTWR')
- def SBJSSP_validation(agent, path, device='cpu', optimums=None, num_val=100, new_attr=False, mode='edge_mode', special=None, DR=None)
- def compare_with_optimum(makespans, files, plot=False, scheduler_name=None)
- def evaluate_agent_on_aggregated_state_DR_interrupted(simulator, mode='MTWR', shutdown_prob=0.2)
- def evaluate_agent_on_aggregated_state_interrupted(simulator, agent, device, mode='edge_mode', shutdown_prob=0.2)
- def SBJSSP_validation_interrupted(agent, path, device='cpu', optimums=None, num_val=100, new_attr=False, mode='edge_mode', special=None, DR=None, shutdown_prob=0.2)
- def random_simulator(min_m=5, max_m=10, max_job=10, new_attr=False, special='SBJSSP')
- def do_simulate_on_aggregated_state_interrupted(simulator, agent, episode_index, device, reward='utilization', scaled=False, mode='edge_mode', shutdown_prob=0.2)
- train_FT10_ppo_node_only.py
2021-03-15
- S-K RL
- train_FT10_ppo_node_only.py
- do_simulate_on_aggregated_state()
- value_loss, action_loss, dist_entropy = agent.fit(eval=0, reward_setting='utilization', device=device, return_scaled=False)
- eval_performance = evaluate_agent_on_aggregated_state(simulator=sim, agent=agent, device='cpu', mode='node_mode')
- val_performance = validation(agent, path, mode='node_mode')
- SBJSSP_report_results.ipynb
- def get_swapping_ops(blocking_op, machine_dict)
- class blMachine(Machine)
- class blMachineManager(MachineManager)
- class blSimulator(Simulator)
- def evaluate_agent_on_aggregated_state(simulator, agent, device, mode='edge_mode')
- def evaluate_agent_on_aggregated_state_DR(simulator, mode='MTWR')
- def SBJSSP_validation(agent, path, device='cpu', optimums=None, num_val=100, new_attr=False, mode='edge_mode', special=None, DR=None)
- def compare_with_optimum(makespans, files, plot=False, scheduler_name=None)
- def evaluate_agent_on_aggregated_state_DR_interrupted(simulator, mode='MTWR', shutdown_prob=0.2)
- def evaluate_agent_on_aggregated_state_interrupted(simulator, agent, device, mode='edge_mode', shutdown_prob=0.2)
- def SBJSSP_validation_interrupted(agent, path, device='cpu', optimums=None, num_val=100, new_attr=False, mode='edge_mode', special=None, DR=None, shutdown_prob=0.2)
- def random_simulator(min_m=5, max_m=10, max_job=10, new_attr=False, special='SBJSSP')
- def do_simulate_on_aggregated_state_interrupted(simulator, agent, episode_index, device, reward='utilization', scaled=False, mode='edge_mode', shutdown_prob=0.2)
- train_FT10_ppo_node_only.py
2021-03-12
- S-K RL
- train_FT10_ppo_node_only.py
- do_simulate_on_aggregated_state()
- value_loss, action_loss, dist_entropy = agent.fit(eval=0, reward_setting='utilization', device=device, return_scaled=False)
- eval_performance = evaluate_agent_on_aggregated_state(simulator=sim, agent=agent, device='cpu', mode='node_mode')
- val_performance = validation(agent, path, mode='node_mode')
- SBJSSP_report_results.ipynb
- def get_swapping_ops(blocking_op, machine_dict)
- class blMachine(Machine)
- class blMachineManager(MachineManager)
- class blSimulator(Simulator)
- def evaluate_agent_on_aggregated_state(simulator, agent, device, mode='edge_mode')
- def evaluate_agent_on_aggregated_state_DR(simulator, mode='MTWR')
- def SBJSSP_validation(agent, path, device='cpu', optimums=None, num_val=100, new_attr=False, mode='edge_mode', special=None, DR=None)
- def compare_with_optimum(makespans, files, plot=False, scheduler_name=None)
- def evaluate_agent_on_aggregated_state_DR_interrupted(simulator, mode='MTWR', shutdown_prob=0.2)
- def evaluate_agent_on_aggregated_state_interrupted(simulator, agent, device, mode='edge_mode', shutdown_prob=0.2)
- def SBJSSP_validation_interrupted(agent, path, device='cpu', optimums=None, num_val=100, new_attr=False, mode='edge_mode', special=None, DR=None, shutdown_prob=0.2)
- def random_simulator(min_m=5, max_m=10, max_job=10, new_attr=False, special='SBJSSP')
- def do_simulate_on_aggregated_state_interrupted(simulator, agent, episode_index, device, reward='utilization', scaled=False, mode='edge_mode', shutdown_prob=0.2)
- train_FT10_ppo_node_only.py
2021-03-11
- S-K RL
- train_FT10_ppo_node_only.py
- do_simulate_on_aggregated_state()
- value_loss, action_loss, dist_entropy = agent.fit(eval=0, reward_setting='utilization', device=device, return_scaled=False)
- eval_performance = evaluate_agent_on_aggregated_state(simulator=sim, agent=agent, device='cpu', mode='node_mode')
- val_performance = validation(agent, path, mode='node_mode')
- SBJSSP_report_results.ipynb
- def get_swapping_ops(blocking_op, machine_dict)
- class blMachine(Machine)
- class blMachineManager(MachineManager)
- class blSimulator(Simulator)
- def evaluate_agent_on_aggregated_state(simulator, agent, device, mode='edge_mode')
- def evaluate_agent_on_aggregated_state_DR(simulator, mode='MTWR')
- def SBJSSP_validation(agent, path, device='cpu', optimums=None, num_val=100, new_attr=False, mode='edge_mode', special=None, DR=None)
- def compare_with_optimum(makespans, files, plot=False, scheduler_name=None)
- def evaluate_agent_on_aggregated_state_DR_interrupted(simulator, mode='MTWR', shutdown_prob=0.2)
- def evaluate_agent_on_aggregated_state_interrupted(simulator, agent, device, mode='edge_mode', shutdown_prob=0.2)
- def SBJSSP_validation_interrupted(agent, path, device='cpu', optimums=None, num_val=100, new_attr=False, mode='edge_mode', special=None, DR=None, shutdown_prob=0.2)
- def random_simulator(min_m=5, max_m=10, max_job=10, new_attr=False, special='SBJSSP')
- def do_simulate_on_aggregated_state_interrupted(simulator, agent, episode_index, device, reward='utilization', scaled=False, mode='edge_mode', shutdown_prob=0.2)
- train_FT10_ppo_node_only.py
2021-03-10
- S-K RL
- train_FT10_ppo_node_only.py
- do_simulate_on_aggregated_state()
- value_loss, action_loss, dist_entropy = agent.fit(eval=0, reward_setting='utilization', device=device, return_scaled=False)
- eval_performance = evaluate_agent_on_aggregated_state(simulator=sim, agent=agent, device='cpu', mode='node_mode')
- val_performance = validation(agent, path, mode='node_mode')
- SBJSSP_report_results.ipynb
- def get_swapping_ops(blocking_op, machine_dict)
- class blMachine(Machine)
- class blMachineManager(MachineManager)
- class blSimulator(Simulator)
- def evaluate_agent_on_aggregated_state(simulator, agent, device, mode='edge_mode')
- def evaluate_agent_on_aggregated_state_DR(simulator, mode='MTWR')
- def SBJSSP_validation(agent, path, device='cpu', optimums=None, num_val=100, new_attr=False, mode='edge_mode', special=None, DR=None)
- def compare_with_optimum(makespans, files, plot=False, scheduler_name=None)
- def evaluate_agent_on_aggregated_state_DR_interrupted(simulator, mode='MTWR', shutdown_prob=0.2)
- def evaluate_agent_on_aggregated_state_interrupted(simulator, agent, device, mode='edge_mode', shutdown_prob=0.2)
- def SBJSSP_validation_interrupted(agent, path, device='cpu', optimums=None, num_val=100, new_attr=False, mode='edge_mode', special=None, DR=None, shutdown_prob=0.2)
- def random_simulator(min_m=5, max_m=10, max_job=10, new_attr=False, special='SBJSSP')
- def do_simulate_on_aggregated_state_interrupted(simulator, agent, episode_index, device, reward='utilization', scaled=False, mode='edge_mode', shutdown_prob=0.2)
- train_FT10_ppo_node_only.py
2021-03-09
- S-K RL
- train_FT10_ppo_node_only.py
- do_simulate_on_aggregated_state()
- value_loss, action_loss, dist_entropy = agent.fit(eval=0, reward_setting='utilization', device=device, return_scaled=False)
- eval_performance = evaluate_agent_on_aggregated_state(simulator=sim, agent=agent, device='cpu', mode='node_mode')
- val_performance = validation(agent, path, mode='node_mode')
- SBJSSP_report_results.ipynb
- def get_swapping_ops(blocking_op, machine_dict)
- class blMachine(Machine)
- class blMachineManager(MachineManager)
- class blSimulator(Simulator)
- def evaluate_agent_on_aggregated_state(simulator, agent, device, mode='edge_mode')
- def evaluate_agent_on_aggregated_state_DR(simulator, mode='MTWR')
- def SBJSSP_validation(agent, path, device='cpu', optimums=None, num_val=100, new_attr=False, mode='edge_mode', special=None, DR=None)
- def compare_with_optimum(makespans, files, plot=False, scheduler_name=None)
- def evaluate_agent_on_aggregated_state_DR_interrupted(simulator, mode='MTWR', shutdown_prob=0.2)
- def evaluate_agent_on_aggregated_state_interrupted(simulator, agent, device, mode='edge_mode', shutdown_prob=0.2)
- def SBJSSP_validation_interrupted(agent, path, device='cpu', optimums=None, num_val=100, new_attr=False, mode='edge_mode', special=None, DR=None, shutdown_prob=0.2)
- def random_simulator(min_m=5, max_m=10, max_job=10, new_attr=False, special='SBJSSP')
- def do_simulate_on_aggregated_state_interrupted(simulator, agent, episode_index, device, reward='utilization', scaled=False, mode='edge_mode', shutdown_prob=0.2)
- train_FT10_ppo_node_only.py
2021-03-08
- S-K RL
- train_FT10_ppo_node_only.py
- do_simulate_on_aggregated_state()
- value_loss, action_loss, dist_entropy = agent.fit(eval=0, reward_setting='utilization', device=device, return_scaled=False)
- eval_performance = evaluate_agent_on_aggregated_state(simulator=sim, agent=agent, device='cpu', mode='node_mode')
- val_performance = validation(agent, path, mode='node_mode')
- SBJSSP_report_results.ipynb
- def get_swapping_ops(blocking_op, machine_dict)
- class blMachine(Machine)
- class blMachineManager(MachineManager)
- class blSimulator(Simulator)
- def evaluate_agent_on_aggregated_state(simulator, agent, device, mode='edge_mode')
- def evaluate_agent_on_aggregated_state_DR(simulator, mode='MTWR')
- def SBJSSP_validation(agent, path, device='cpu', optimums=None, num_val=100, new_attr=False, mode='edge_mode', special=None, DR=None)
- def compare_with_optimum(makespans, files, plot=False, scheduler_name=None)
- def evaluate_agent_on_aggregated_state_DR_interrupted(simulator, mode='MTWR', shutdown_prob=0.2)
- def evaluate_agent_on_aggregated_state_interrupted(simulator, agent, device, mode='edge_mode', shutdown_prob=0.2)
- def SBJSSP_validation_interrupted(agent, path, device='cpu', optimums=None, num_val=100, new_attr=False, mode='edge_mode', special=None, DR=None, shutdown_prob=0.2)
- def random_simulator(min_m=5, max_m=10, max_job=10, new_attr=False, special='SBJSSP')
- def do_simulate_on_aggregated_state_interrupted(simulator, agent, episode_index, device, reward='utilization', scaled=False, mode='edge_mode', shutdown_prob=0.2)
- train_FT10_ppo_node_only.py
2021-03-07
- S-K RL
- train_FT10_ppo_node_only.py
- do_simulate_on_aggregated_state()
- value_loss, action_loss, dist_entropy = agent.fit(eval=0, reward_setting='utilization', device=device, return_scaled=False)
- eval_performance = evaluate_agent_on_aggregated_state(simulator=sim, agent=agent, device='cpu', mode='node_mode')
- val_performance = validation(agent, path, mode='node_mode')
- SBJSSP_report_results.ipynb
- def get_swapping_ops(blocking_op, machine_dict)
- class blMachine(Machine)
- class blMachineManager(MachineManager)
- class blSimulator(Simulator)
- def evaluate_agent_on_aggregated_state(simulator, agent, device, mode='edge_mode')
- def evaluate_agent_on_aggregated_state_DR(simulator, mode='MTWR')
- def SBJSSP_validation(agent, path, device='cpu', optimums=None, num_val=100, new_attr=False, mode='edge_mode', special=None, DR=None)
- def compare_with_optimum(makespans, files, plot=False, scheduler_name=None)
- def evaluate_agent_on_aggregated_state_DR_interrupted(simulator, mode='MTWR', shutdown_prob=0.2)
- def evaluate_agent_on_aggregated_state_interrupted(simulator, agent, device, mode='edge_mode', shutdown_prob=0.2)
- def SBJSSP_validation_interrupted(agent, path, device='cpu', optimums=None, num_val=100, new_attr=False, mode='edge_mode', special=None, DR=None, shutdown_prob=0.2)
- def random_simulator(min_m=5, max_m=10, max_job=10, new_attr=False, special='SBJSSP')
- def do_simulate_on_aggregated_state_interrupted(simulator, agent, episode_index, device, reward='utilization', scaled=False, mode='edge_mode', shutdown_prob=0.2)
- train_FT10_ppo_node_only.py
2021-03-06
- S-K RL
- train_FT10_ppo_node_only.py
- do_simulate_on_aggregated_state()
- value_loss, action_loss, dist_entropy = agent.fit(eval=0, reward_setting='utilization', device=device, return_scaled=False)
- eval_performance = evaluate_agent_on_aggregated_state(simulator=sim, agent=agent, device='cpu', mode='node_mode')
- val_performance = validation(agent, path, mode='node_mode')
- SBJSSP_report_results.ipynb
- def get_swapping_ops(blocking_op, machine_dict)
- class blMachine(Machine)
- class blMachineManager(MachineManager)
- class blSimulator(Simulator)
- def evaluate_agent_on_aggregated_state(simulator, agent, device, mode='edge_mode')
- def evaluate_agent_on_aggregated_state_DR(simulator, mode='MTWR')
- def SBJSSP_validation(agent, path, device='cpu', optimums=None, num_val=100, new_attr=False, mode='edge_mode', special=None, DR=None)
- def compare_with_optimum(makespans, files, plot=False, scheduler_name=None)
- def evaluate_agent_on_aggregated_state_DR_interrupted(simulator, mode='MTWR', shutdown_prob=0.2)
- def evaluate_agent_on_aggregated_state_interrupted(simulator, agent, device, mode='edge_mode', shutdown_prob=0.2)
- def SBJSSP_validation_interrupted(agent, path, device='cpu', optimums=None, num_val=100, new_attr=False, mode='edge_mode', special=None, DR=None, shutdown_prob=0.2)
- def random_simulator(min_m=5, max_m=10, max_job=10, new_attr=False, special='SBJSSP')
- def do_simulate_on_aggregated_state_interrupted(simulator, agent, episode_index, device, reward='utilization', scaled=False, mode='edge_mode', shutdown_prob=0.2)
- train_FT10_ppo_node_only.py
2021-03-05
- S-K RL
- train_FT10_ppo_node_only.py
- do_simulate_on_aggregated_state()
- value_loss, action_loss, dist_entropy = agent.fit(eval=0, reward_setting='utilization', device=device, return_scaled=False)
- eval_performance = evaluate_agent_on_aggregated_state(simulator=sim, agent=agent, device='cpu', mode='node_mode')
- val_performance = validation(agent, path, mode='node_mode')
- SBJSSP_report_results.ipynb
- def get_swapping_ops(blocking_op, machine_dict)
- class blMachine(Machine)
- class blMachineManager(MachineManager)
- class blSimulator(Simulator)
- def evaluate_agent_on_aggregated_state(simulator, agent, device, mode='edge_mode')
- def evaluate_agent_on_aggregated_state_DR(simulator, mode='MTWR')
- def SBJSSP_validation(agent, path, device='cpu', optimums=None, num_val=100, new_attr=False, mode='edge_mode', special=None, DR=None)
- def compare_with_optimum(makespans, files, plot=False, scheduler_name=None)
- def evaluate_agent_on_aggregated_state_DR_interrupted(simulator, mode='MTWR', shutdown_prob=0.2)
- def evaluate_agent_on_aggregated_state_interrupted(simulator, agent, device, mode='edge_mode', shutdown_prob=0.2)
- def SBJSSP_validation_interrupted(agent, path, device='cpu', optimums=None, num_val=100, new_attr=False, mode='edge_mode', special=None, DR=None, shutdown_prob=0.2)
- def random_simulator(min_m=5, max_m=10, max_job=10, new_attr=False, special='SBJSSP')
- def do_simulate_on_aggregated_state_interrupted(simulator, agent, episode_index, device, reward='utilization', scaled=False, mode='edge_mode', shutdown_prob=0.2)
- train_FT10_ppo_node_only.py
2021-03-04
- S-K RL
- train_FT10_ppo_node_only.py
- do_simulate_on_aggregated_state()
- value_loss, action_loss, dist_entropy = agent.fit(eval=0, reward_setting='utilization', device=device, return_scaled=False)
- eval_performance = evaluate_agent_on_aggregated_state(simulator=sim, agent=agent, device='cpu', mode='node_mode')
- val_performance = validation(agent, path, mode='node_mode')
- SBJSSP_report_results.ipynb
- def get_swapping_ops(blocking_op, machine_dict)
- class blMachine(Machine)
- class blMachineManager(MachineManager)
- class blSimulator(Simulator)
- def evaluate_agent_on_aggregated_state(simulator, agent, device, mode='edge_mode')
- def evaluate_agent_on_aggregated_state_DR(simulator, mode='MTWR')
- def SBJSSP_validation(agent, path, device='cpu', optimums=None, num_val=100, new_attr=False, mode='edge_mode', special=None, DR=None)
- def compare_with_optimum(makespans, files, plot=False, scheduler_name=None)
- def evaluate_agent_on_aggregated_state_DR_interrupted(simulator, mode='MTWR', shutdown_prob=0.2)
- def evaluate_agent_on_aggregated_state_interrupted(simulator, agent, device, mode='edge_mode', shutdown_prob=0.2)
- def SBJSSP_validation_interrupted(agent, path, device='cpu', optimums=None, num_val=100, new_attr=False, mode='edge_mode', special=None, DR=None, shutdown_prob=0.2)
- def random_simulator(min_m=5, max_m=10, max_job=10, new_attr=False, special='SBJSSP')
- def do_simulate_on_aggregated_state_interrupted(simulator, agent, episode_index, device, reward='utilization', scaled=False, mode='edge_mode', shutdown_prob=0.2)
- train_FT10_ppo_node_only.py
2021-03-03
- S-K RL
- train_FT10_ppo_node_only.py
- do_simulate_on_aggregated_state()
- value_loss, action_loss, dist_entropy = agent.fit(eval=0, reward_setting='utilization', device=device, return_scaled=False)
- eval_performance = evaluate_agent_on_aggregated_state(simulator=sim, agent=agent, device='cpu', mode='node_mode')
- val_performance = validation(agent, path, mode='node_mode')
- SBJSSP_report_results.ipynb
- def get_swapping_ops(blocking_op, machine_dict)
- class blMachine(Machine)
- class blMachineManager(MachineManager)
- class blSimulator(Simulator)
- def evaluate_agent_on_aggregated_state(simulator, agent, device, mode='edge_mode')
- def evaluate_agent_on_aggregated_state_DR(simulator, mode='MTWR')
- def SBJSSP_validation(agent, path, device='cpu', optimums=None, num_val=100, new_attr=False, mode='edge_mode', special=None, DR=None)
- def compare_with_optimum(makespans, files, plot=False, scheduler_name=None)
- def evaluate_agent_on_aggregated_state_DR_interrupted(simulator, mode='MTWR', shutdown_prob=0.2)
- def evaluate_agent_on_aggregated_state_interrupted(simulator, agent, device, mode='edge_mode', shutdown_prob=0.2)
- def SBJSSP_validation_interrupted(agent, path, device='cpu', optimums=None, num_val=100, new_attr=False, mode='edge_mode', special=None, DR=None, shutdown_prob=0.2)
- def random_simulator(min_m=5, max_m=10, max_job=10, new_attr=False, special='SBJSSP')
- def do_simulate_on_aggregated_state_interrupted(simulator, agent, episode_index, device, reward='utilization', scaled=False, mode='edge_mode', shutdown_prob=0.2)
- train_FT10_ppo_node_only.py
2021-03-02
- S-K RL
- train_FT10_ppo_node_only.py
- do_simulate_on_aggregated_state()
- value_loss, action_loss, dist_entropy = agent.fit(eval=0, reward_setting='utilization', device=device, return_scaled=False)
- eval_performance = evaluate_agent_on_aggregated_state(simulator=sim, agent=agent, device='cpu', mode='node_mode')
- val_performance = validation(agent, path, mode='node_mode')
- SBJSSP_report_results.ipynb
- def get_swapping_ops(blocking_op, machine_dict)
- class blMachine(Machine)
- class blMachineManager(MachineManager)
- class blSimulator(Simulator)
- def evaluate_agent_on_aggregated_state(simulator, agent, device, mode='edge_mode')
- def evaluate_agent_on_aggregated_state_DR(simulator, mode='MTWR')
- def SBJSSP_validation(agent, path, device='cpu', optimums=None, num_val=100, new_attr=False, mode='edge_mode', special=None, DR=None)
- def compare_with_optimum(makespans, files, plot=False, scheduler_name=None)
- def evaluate_agent_on_aggregated_state_DR_interrupted(simulator, mode='MTWR', shutdown_prob=0.2)
- def evaluate_agent_on_aggregated_state_interrupted(simulator, agent, device, mode='edge_mode', shutdown_prob=0.2)
- def SBJSSP_validation_interrupted(agent, path, device='cpu', optimums=None, num_val=100, new_attr=False, mode='edge_mode', special=None, DR=None, shutdown_prob=0.2)
- def random_simulator(min_m=5, max_m=10, max_job=10, new_attr=False, special='SBJSSP')
- def do_simulate_on_aggregated_state_interrupted(simulator, agent, episode_index, device, reward='utilization', scaled=False, mode='edge_mode', shutdown_prob=0.2)
- train_FT10_ppo_node_only.py
2021-03-01
- S-K RL
- train_FT10_ppo_node_only.py
- do_simulate_on_aggregated_state()
- value_loss, action_loss, dist_entropy = agent.fit(eval=0, reward_setting='utilization', device=device, return_scaled=False)
- eval_performance = evaluate_agent_on_aggregated_state(simulator=sim, agent=agent, device='cpu', mode='node_mode')
- val_performance = validation(agent, path, mode='node_mode')
- SBJSSP_report_results.ipynb
- def get_swapping_ops(blocking_op, machine_dict)
- class blMachine(Machine)
- class blMachineManager(MachineManager)
- class blSimulator(Simulator)
- def evaluate_agent_on_aggregated_state(simulator, agent, device, mode='edge_mode')
- def evaluate_agent_on_aggregated_state_DR(simulator, mode='MTWR')
- def SBJSSP_validation(agent, path, device='cpu', optimums=None, num_val=100, new_attr=False, mode='edge_mode', special=None, DR=None)
- def compare_with_optimum(makespans, files, plot=False, scheduler_name=None)
- def evaluate_agent_on_aggregated_state_DR_interrupted(simulator, mode='MTWR', shutdown_prob=0.2)
- def evaluate_agent_on_aggregated_state_interrupted(simulator, agent, device, mode='edge_mode', shutdown_prob=0.2)
- def SBJSSP_validation_interrupted(agent, path, device='cpu', optimums=None, num_val=100, new_attr=False, mode='edge_mode', special=None, DR=None, shutdown_prob=0.2)
- def random_simulator(min_m=5, max_m=10, max_job=10, new_attr=False, special='SBJSSP')
- def do_simulate_on_aggregated_state_interrupted(simulator, agent, episode_index, device, reward='utilization', scaled=False, mode='edge_mode', shutdown_prob=0.2)
- train_FT10_ppo_node_only.py
Template
- Fundamental of Reinforcement Learning
- Chapter #.
- Machine Learning/Deep Learning for Everyone (모두를 위한 머신러닝/딥러닝 강의)
- Lecture #.
- UCL Course on RL
- Lecture #.
- Reinforcement Learning
- Page #.
- 팡요랩 (Pang-Yo Lab)
- Reinforcement Learning Lecture 1 - Introduction to Reinforcement Learning
- Reinforcement Learning Lecture 2 - Markov Decision Process
- Reinforcement Learning Lecture 3 - Planning by Dynamic Programming
- Reinforcement Learning Lecture 4 - Model-Free Prediction
- Reinforcement Learning Lecture 5 - Model-Free Control
- Reinforcement Learning Lecture 6 - Value Function Approximation
- Reinforcement Learning Lecture 7 - Policy Gradient
- Reinforcement Learning Lecture 8 - Integrating Learning and Planning
- Reinforcement Learning Lecture 9 - Exploration and Exploitation
- Reinforcement Learning Lecture 10 - Classic Games
- Pattern Recognition & Machine Learning
- S-K RL
- multi_step_actor
Comments