Kafka: unable to start Kafka - process can not access file 00000000000000000000.timeindex

Submitted on 2019-11-29 05:53:12

Question


Kafka enthusiast here; I need a little help. I am unable to start Kafka because the file \00000000000000000000.timeindex is being used by another process. Below are the logs:

[2017-08-09 22:49:22,811] FATAL [Kafka Server 0], Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
java.nio.file.FileSystemException: \installation\kafka_2.11-0.11.0.0\log\test-0\00000000000000000000.timeindex: The process cannot access the file because it is being used by another process.

        at sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:86)
        at sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97)
        at sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:102)
        at sun.nio.fs.WindowsFileSystemProvider.implDelete(WindowsFileSystemProvider.java:269)
        at sun.nio.fs.AbstractFileSystemProvider.deleteIfExists(AbstractFileSystemProvider.java:108)
        at java.nio.file.Files.deleteIfExists(Files.java:1165)
        at kafka.log.Log$$anonfun$loadSegmentFiles$3.apply(Log.scala:311)
        at kafka.log.Log$$anonfun$loadSegmentFiles$3.apply(Log.scala:272)
        at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
        at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
        at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
        at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:732)
        at kafka.log.Log.loadSegmentFiles(Log.scala:272)
        at kafka.log.Log.loadSegments(Log.scala:376)
        at kafka.log.Log.<init>(Log.scala:179)
        at kafka.log.Log$.apply(Log.scala:1580)
        at kafka.log.LogManager$$anonfun$loadLogs$2$$anonfun$5$$anonfun$apply$12$$anonfun$apply$1.apply$mcV$sp(LogManager.scala:172)
        at kafka.utils.CoreUtils$$anon$1.run(CoreUtils.scala:57)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
[2017-08-09 22:49:22,826] INFO [Kafka Server 0], shutting down (kafka.server.KafkaServer)

Answer 1:


I had the same issue. The only fix I could find was to delete the C:\tmp\kafka-logs directory. After that I was able to start the Kafka server.

You will lose your data and the offset will start from 0.
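The cleanup above can be scripted; this is a hedged sketch assuming a Linux-style dev setup (the asker's Windows default would be C:\tmp\kafka-logs). LOGDIR is an assumption; point it at whatever log.dirs in server.properties says.

```shell
# Sketch only, for a DEV broker: wipe the log directory so Kafka rebuilds
# it on the next startup. WARNING: this deletes all topic data and offsets
# restart from 0, exactly as the answer says.
LOGDIR="${LOGDIR:-/tmp/kafka-logs}"   # assumed path; check server.properties
rm -rf "$LOGDIR"                      # broker recreates it on startup
```

Stop the broker before running this, or you will hit the same "file in use" error while deleting.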




Answer 2:


java.nio.file.FileSystemException: \installation\kafka_2.11-0.11.0.0\log\test-0\00000000000000000000.timeindex: The process cannot access the file because it is being used by another process.

00000000000000000000.timeindex is being used by another process, so you can kill the process holding it with the following commands:

$ ps aux | grep zookeeper
$ sudo kill -9 <PID> 

Here <PID> is ZooKeeper's process ID.


The underlying problem is not fixed yet. It is described here: https://issues.apache.org/jira/browse/KAFKA-1194

There are two temporary workarounds, given by ephemeral972:

  1. [Recommended] Clean up the broker ids under the ZooKeeper path /brokers/ids/[]. Use the zk-cli tool's delete command to remove the stale paths, then start your brokers and verify each one registers with the coordinator.
  2. The other way is to change the broker-id in the Kafka server config and restart the broker. However, this can corrupt your partitions and data, and is not recommended.
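Workaround 1 can be sketched as a zkCli session. The host/port and the broker id 0 are assumptions; run ls /brokers/ids first and delete only the ids of brokers that are not actually running.

```shell
# Hedged sketch of the zk-cli cleanup: list broker registrations, then
# delete a stale one. Requires a running ZooKeeper; adjust host/port and
# the broker id (0 here is just an example) to your own setup.
clean_broker_ids() {
  bin/zkCli.sh -server localhost:2181 <<'EOF'
ls /brokers/ids
delete /brokers/ids/0
quit
EOF
}

# Example: clean_broker_ids   (then start the broker and check it registers)
```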



Answer 3:


I faced the same issue, and restarting Kafka, ZooKeeper, and then Windows didn't work for me. Here is what did work (don't do this in production; I'm not sure it is entirely safe, but it may be acceptable on a DEVELOPMENT Kafka server).

On a dev Kafka server: go to the affected directory (for instance \installation\kafka_2.11-0.11.0.0\log\test-0) and delete all files other than:

00000000000000000000.index
00000000000000000000.log
00000000000000000000.timeindex
leader-epoch-checkpoint

Then restart (ZooKeeper, then Kafka). For me Kafka added a .snapshot file and everything was fine again.
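The "delete everything except these four files" step can be sketched as a small shell function. This is an assumption-laden sketch for a dev box: the directory path is whatever your log.dirs points at, and the broker must be stopped first.

```shell
# Keep only the base segment files in a partition directory and delete
# everything else (stray .swap/.deleted/.cleaned files etc.). DEV use only.
keep_base_segments() {
  dir="$1"
  for f in "$dir"/*; do
    case "$(basename "$f")" in
      00000000000000000000.index|00000000000000000000.log|00000000000000000000.timeindex|leader-epoch-checkpoint)
        ;;                    # on the keep list: leave it alone
      *)
        rm -f -- "$f" ;;      # anything else gets deleted
    esac
  done
}

# Example: keep_base_segments /tmp/kafka-logs/test-0
```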




Answer 4:


All the other answers give the same solution, removing the data, rather than explaining how to prevent the problem.

In fact, you just need to stop Kafka and ZooKeeper properly.

Just run these two commands, in this order:

kafka-server-stop.sh

zookeeper-server-stop.sh

Then the next time you start them, you will see no problems.
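The shutdown order is the whole point here, so it can be worth wrapping in a tiny script. A sketch, assuming the standard Kafka distribution layout; KAFKA_HOME is an assumption, and on Windows the scripts end in .bat instead.

```shell
# Orderly shutdown sketch: stop the broker first so it can close its log
# segments cleanly, then stop ZooKeeper. Adjust paths to your install.
stop_kafka_stack() {
  "${KAFKA_HOME:-/opt/kafka}/bin/kafka-server-stop.sh"      # broker first
  "${KAFKA_HOME:-/opt/kafka}/bin/zookeeper-server-stop.sh"  # then ZooKeeper
}
```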




Answer 5:


I followed the approach suggested by @SkyWalker.

Follow these steps:

  1. Open zkCli and list everything under the broker path.

  2. Go inside topics and press Tab twice. You will see all the topics listed.

  3. Then delete each topic.




Answer 6:


I got this error too while running Kafka on Windows. You can avoid it by changing the default config in the server.properties file.

Please follow these steps:

  1. Go to the config folder of the Kafka installation.
  2. Open the server.properties file.
  3. You will see this config:

A comma separated list of directories under which to store log files:

log.dirs=/tmp/logs/kafka

Change the value of log.dirs=/tmp/logs/kafka to some other value, for example:

log.dirs=/tmp/logs/kafka1
  4. Now start your Kafka server again.

This should solve the issue.




Answer 7:


I faced the same problem, and this is how I resolved it.

Change the log.dirs path in server.properties, e.g. log.dirs=C:\kafka\logs

Another solution that worked: delete all files from the log directory (wherever it is configured), e.g. kafkalogs\test-0.




Answer 8:


I had a similar issue on Windows, partly because I had deleted a couple of topics (since I found no other way to flush only the messages from those topics). This is what worked for me.

  1. Change log.dirs in config/server.properties to a new location
  2. Change dataDir in config/zookeeper.properties to a new location
  3. Restart ZooKeeper and Kafka

The above will obviously only work when you have no topics to cater for other than the ones you deleted on ZooKeeper/Kafka. If there are other topics whose configuration you still want to retain, I believe the solution proposed by @Sumit Das might work. I had issues starting zkCli on my Windows machine, and since the only topics I had were the ones I deleted on my brokers, I could safely do the above steps and get away with it.




Answer 9:


This seems to be a known issue that gets triggered on Windows after 168 hours have elapsed since you last published a message. Apparently this issue is being tracked and worked on here: KAFKA-8145

There are two workarounds for this:

  1. As suggested by others here, you can clean up the directory containing your log files (or take a backup and have log.dirs point to another directory). However, this way you will lose your data.
  2. Go to your server.properties file and make the following changes to it. Note: this is a temporary solution to allow your consumers to come up and drain any remaining data, so that there is no data loss. Once you have all the data you need, you should revert to step 1 to clean up your data folder once and for all.

Update the property below to the prescribed value:

# The minimum age of a log file to be eligible for deletion due to age
log.retention.hours=-1

Add this property at the end of your properties file.

log.cleaner.enable=false

Essentially, you are telling the Kafka broker not to bother deleting old messages, and that the age of all messages is now infinite, i.e. they will never be deleted. This is obviously not a desirable state, so you should only do it long enough to consume whatever you need, and then clean up your files/directory (step 1). IMHO the JIRA issue mentioned above will be worked on soon, and per this comment it looks like it may be resolved shortly.
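The two property changes above can be applied with a one-off script. A sketch, assuming a standard server.properties with a log.retention.hours line already present; the file path in the example is hypothetical, and the .bak backup lets you revert afterwards as step 1 requires.

```shell
# Apply the temporary workaround: infinite retention, cleaner disabled.
# Revert (restore the .bak) once your consumers have drained the data.
disable_log_cleanup() {
  props="$1"
  # rewrite the existing retention setting, keeping a backup copy
  sed -i.bak 's/^log\.retention\.hours=.*/log.retention.hours=-1/' "$props"
  # append log.cleaner.enable=false only if not already set
  grep -q '^log.cleaner.enable=' "$props" || echo 'log.cleaner.enable=false' >> "$props"
}

# Example: disable_log_cleanup config/server.properties
```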




Answer 10:


I had configured the tmp path as below (in the file ./config/server.properties):

log.dirs=d:\tmp\kafka-logs

Then I changed the backslashes '\' to forward slashes '/':

log.dirs=d:/tmp/kafka-logs

and created the folder, which solved the problem.



Source: https://stackoverflow.com/questions/45599625/kafka-unable-to-start-kafka-process-can-not-access-file-00000000000000000000
