Question
Background: I read on javacodegeeks: "... SimpleAsyncTaskExecutor is ok for toy projects but for anything larger than that it’s a bit risky since it does not limit concurrent threads and does not reuse threads. So to be safe, we will also add a task executor bean..." and Baeldung has a very simple example of how to add your own task executor. But I cannot find any guidance explaining the consequences, or concrete cases where it is worth applying.
Goal: I am working on a corporate architecture for publishing our microservices' logs to Kafka topics. The warning "risky because it does not limit concurrent threads and does not reuse them" seems especially relevant to my log-based use case.
I am running the code below successfully on my local desktop, but I wonder whether I am providing a custom task executor properly.
My question: given that I am already using KafkaTemplate (i.e. singleton and thread-safe by default, at least for producing/sending messages, as far as I understand it), does the configuration below really go in the right direction to reuse threads and avoid accidentally spawning threads, as would happen with SimpleAsyncTaskExecutor?
Producer config
@EnableAsync
@Configuration
public class KafkaProducerConfig {

    private static final Logger LOGGER = LoggerFactory.getLogger(KafkaProducerConfig.class);

    @Value("${kafka.brokers}")
    private String servers;

    @Bean
    public Executor taskExecutor() {
        ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
        executor.setCorePoolSize(2);
        executor.setMaxPoolSize(2);
        executor.setQueueCapacity(500);
        executor.setThreadNamePrefix("KafkaMsgExecutor-");
        executor.initialize();
        return executor;
    }

    @Bean
    public Map<String, Object> producerConfigs() {
        Map<String, Object> props = new HashMap<>();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, servers);
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        // A producer needs a serializer here; JsonDeserializer (as originally posted) is a
        // consumer-side class. StringSerializer matches the KafkaTemplate<String, String> below.
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        return props;
    }
}
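Note that the producerConfigs() map above is not consumed by anything shown in the post: for the autowired KafkaTemplate in Producer to pick up these properties, a ProducerFactory and KafkaTemplate bean are typically defined as well. A minimal sketch of the usual spring-kafka wiring (these two beans are my addition, not part of the original post):

```java
// Hypothetical completion of KafkaProducerConfig: feed the props map into the
// KafkaTemplate that Producer autowires.
@Bean
public ProducerFactory<String, String> producerFactory() {
    return new DefaultKafkaProducerFactory<>(producerConfigs());
}

@Bean
public KafkaTemplate<String, String> kafkaTemplate() {
    return new KafkaTemplate<>(producerFactory());
}
```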
Producer
@Service
public class Producer {

    private static final Logger LOGGER = LoggerFactory.getLogger(Producer.class);

    @Autowired
    private KafkaTemplate<String, String> kafkaTemplate;

    @Async
    public void send(String topic, String message) {
        ListenableFuture<SendResult<String, String>> future = kafkaTemplate.send(topic, message);
        future.addCallback(new ListenableFutureCallback<SendResult<String, String>>() {
            @Override
            public void onSuccess(final SendResult<String, String> result) {
                // parameter renamed from "message" to avoid shadowing the method argument
                LOGGER.info("sent message= " + result + " with offset= " + result.getRecordMetadata().offset());
            }

            @Override
            public void onFailure(final Throwable throwable) {
                LOGGER.error("unable to send message= " + message, throwable);
            }
        });
    }
}
for demo purposes:
@SpringBootApplication
public class KafkaDemoApplication implements CommandLineRunner {

    @Autowired
    private Producer p;

    public static void main(String[] args) {
        SpringApplication.run(KafkaDemoApplication.class, args);
    }

    @Override
    public void run(String... strings) throws Exception {
        p.send("test", " qualquer messagem demonstrativa");
    }
}
Answer 1:
This is the default implementation of SimpleAsyncTaskExecutor's doExecute:

protected void doExecute(Runnable task) {
    Thread thread = (this.threadFactory != null ? this.threadFactory.newThread(task) : createThread(task));
    thread.start();
}
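That strategy can be reproduced in plain java.util.concurrent to see the consequence directly: every submitted task lands on a brand-new thread (class and method names below are mine, for illustration only):

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CountDownLatch;

// Plain-Java mimic of SimpleAsyncTaskExecutor's strategy: one new Thread per task.
public class PerTaskThreads {

    static int countDistinctThreads(int tasks) throws InterruptedException {
        Set<String> names = ConcurrentHashMap.newKeySet();
        CountDownLatch done = new CountDownLatch(tasks);
        for (int i = 0; i < tasks; i++) {
            // createThread + start, just like doExecute above
            new Thread(() -> {
                names.add(Thread.currentThread().getName());
                done.countDown();
            }).start();
        }
        done.await();
        return names.size(); // one distinct thread per task -- nothing is reused
    }

    public static void main(String[] args) throws Exception {
        System.out.println("distinct threads for 20 tasks: " + countDistinctThreads(20));
    }
}
```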
A new thread is created for every task, and thread creation in Java is not cheap (reference):
"Thread objects use a significant amount of memory, and in a large-scale application, allocating and deallocating many thread objects creates a significant memory management overhead."
=> Repeatedly executing tasks with this task executor will hurt application performance (moreover, this executor by default does not limit the number of concurrent tasks).
That's why you are advised to use a thread pool implementation: the thread-creation overhead is still there, but it is significantly reduced because threads are reused instead of created, fired, and forgotten.
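Running the same 20 tasks through a small fixed pool shows the contrast (again a plain java.util.concurrent sketch with names of my choosing):

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// The same 20 tasks on a 2-thread pool: threads are parked and reused, not re-created.
public class PooledThreads {

    static int countDistinctThreads(int tasks, int poolSize) throws InterruptedException {
        Set<String> names = ConcurrentHashMap.newKeySet();
        CountDownLatch done = new CountDownLatch(tasks);
        ExecutorService pool = Executors.newFixedThreadPool(poolSize);
        for (int i = 0; i < tasks; i++) {
            pool.execute(() -> {
                names.add(Thread.currentThread().getName());
                done.countDown();
            });
        }
        done.await();
        pool.shutdown();
        return names.size(); // at most poolSize distinct threads serve all tasks
    }

    public static void main(String[] args) throws Exception {
        System.out.println("distinct threads: " + countDistinctThreads(20, 2));
    }
}
```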
When configuring ThreadPoolTaskExecutor, two notable parameters should be set according to your application load:

private int maxPoolSize = Integer.MAX_VALUE;

This is the maximum number of threads in the pool.

private int queueCapacity = Integer.MAX_VALUE;

This is the maximum number of queued tasks. With the unbounded default, tasks can pile up in the queue until the JVM runs out of memory; leaving either parameter at Integer.MAX_VALUE may lead to resource exhaustion or crash your server.
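With a bounded queue, the failure mode changes from memory exhaustion to an explicit rejection once the queue is full, which a small sketch with a raw ThreadPoolExecutor (the class that backs ThreadPoolTaskExecutor) can demonstrate; class and method names here are illustrative:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// A 1-thread pool with queueCapacity 2: the 4th task cannot be buffered and is rejected
// (default AbortPolicy) instead of growing the queue without bound.
public class BoundedQueueDemo {

    static boolean fourthTaskRejected() throws InterruptedException {
        CountDownLatch release = new CountDownLatch(1);
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 1, 0L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<>(2)); // bounded queueCapacity = 2
        Runnable blocker = () -> { try { release.await(); } catch (InterruptedException e) { } };
        pool.execute(blocker);  // occupies the single worker thread
        pool.execute(() -> {}); // queued (1 of 2)
        pool.execute(() -> {}); // queued (2 of 2)
        boolean rejected = false;
        try {
            pool.execute(() -> {}); // queue full, pool at maxPoolSize -> rejected
        } catch (RejectedExecutionException e) {
            rejected = true;
        }
        release.countDown();
        pool.shutdown();
        return rejected;
    }

    public static void main(String[] args) throws Exception {
        System.out.println("4th task rejected: " + fourthTaskRejected());
    }
}
```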
You can improve throughput by increasing the maximum pool size via setMaxPoolSize(). To reduce warm-up time when load increases, set the core pool size to a higher value via setCorePoolSize(). Note that the extra threads between corePoolSize and maxPoolSize are only created once the queue is full, so with a large queue capacity the pool rarely grows beyond corePoolSize.
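That queue-before-growth behavior is easy to observe with a raw ThreadPoolExecutor (names in this sketch are mine): with core=1, max=2, and queueCapacity=1, the second thread only appears when the queue can no longer absorb a task.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// Extra threads between corePoolSize and maxPoolSize are created only when the
// queue is full, not as soon as load exceeds corePoolSize.
public class CoreVsMaxDemo {

    static int[] poolGrowth() throws InterruptedException {
        CountDownLatch release = new CountDownLatch(1);
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 2, 60L, TimeUnit.SECONDS,
                new ArrayBlockingQueue<>(1)); // core=1, max=2, queueCapacity=1
        Runnable blocker = () -> { try { release.await(); } catch (InterruptedException e) { } };
        pool.execute(blocker);              // core thread created and kept busy
        pool.execute(blocker);              // queued: the pool does NOT grow yet
        int afterTwo = pool.getPoolSize();  // still 1
        pool.execute(blocker);              // queue full -> second thread created
        int afterThree = pool.getPoolSize(); // now 2 (maxPoolSize reached)
        release.countDown();
        pool.shutdown();
        return new int[] { afterTwo, afterThree };
    }

    public static void main(String[] args) throws Exception {
        int[] sizes = poolGrowth();
        System.out.println("pool size after 2 tasks: " + sizes[0] + ", after 3 tasks: " + sizes[1]);
    }
}
```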
Source: https://stackoverflow.com/questions/60385961/what-are-the-drawnbacks-and-risks-replacing-default-simpleasynctaskexecutor-by-o