@Hurriance I completely agree with you. When a company pays a monthly salary, it isn't buying a month of your time, it's paying to get problems solved.
But the culture here now is: one month's pay means one month's worth of visible work. There has to be output every month, and what that output actually is doesn't seem to matter to the company.
Once the product is done and there are no bugs left, you still have to invent things to do (mainly because if there's nothing to do, the guys on the team have nothing to write in their daily reports).
Finally, some hope: on the same server I deployed a minimal flask + celery service and ran it with docker-compose, and it runs asynchronously (a rough sketch of that minimal service is after the log below).
So the server is fine and redis is fine; the problem must be somewhere in the original program. Time to keep looking:
[2023-09-01 11:08:45,531: INFO/MainProcess] Task app.controller.index.add_together[d4885e83-f346-46b9-98c2-a9f981d7d1de] received
[2023-09-01 11:08:45,533: INFO/MainProcess] Task app.controller.index.add_together[0abc8808-5603-4c61-87de-f6bcd2747d53] received
[2023-09-01 11:08:45,535: INFO/MainProcess] Task app.controller.index.add_together[e1211bbc-8a76-4d8c-94d6-e3904cc50bdc] received
[2023-09-01 11:08:45,538: INFO/MainProcess] Task app.controller.index.add_together[3a099971-abc5-4c2c-b784-1a2aaba86a24] received
[2023-09-01 11:08:45,539: INFO/MainProcess] Task app.controller.index.add_together[f1a6604d-2757-4742-b4b5-33c4b92bbbb8] received
[2023-09-01 11:08:45,541: INFO/MainProcess] Task app.controller.index.add_together[d380858f-3e65-4569-bcea-54ea8db5e6cf] received
[2023-09-01 11:08:45,542: INFO/MainProcess] Task app.controller.index.add_together[740fbfed-7074-49f1-8680-6ddc48bfc2da] received
[2023-09-01 11:08:45,544: INFO/MainProcess] Task app.controller.index.add_together[78b6ee5f-15a0-409b-b41f-709b0fdcb818] received
[2023-09-01 11:08:45,545: INFO/MainProcess] Task app.controller.index.add_together[a482a9d2-1ffd-47df-b421-0bfcd1b386e1] received
[2023-09-01 11:08:45,546: INFO/MainProcess] Task app.controller.index.add_together[7baa35a0-d695-4010-8120-051d5eea9af7] received
[2023-09-01 11:08:46,535: INFO/ForkPoolWorker-7] Task app.controller.index.add_together[d4885e83-f346-46b9-98c2-a9f981d7d1de] succeeded in 1.0014203377068043s: 231
[2023-09-01 11:08:46,535: INFO/ForkPoolWorker-8] Task app.controller.index.add_together[0abc8808-5603-4c61-87de-f6bcd2747d53] succeeded in 1.001225769519806s: 647
[2023-09-01 11:08:46,537: INFO/ForkPoolWorker-1] Task app.controller.index.add_together[e1211bbc-8a76-4d8c-94d6-e3904cc50bdc] succeeded in 1.001103661954403s: 308
[2023-09-01 11:08:46,540: INFO/ForkPoolWorker-2] Task app.controller.index.add_together[3a099971-abc5-4c2c-b784-1a2aaba86a24] succeeded in 1.0009450502693653s: 735
[2023-09-01 11:08:46,542: INFO/ForkPoolWorker-3] Task app.controller.index.add_together[f1a6604d-2757-4742-b4b5-33c4b92bbbb8] succeeded in 1.0019154399633408s: 554
[2023-09-01 11:08:46,544: INFO/ForkPoolWorker-5] Task app.controller.index.add_together[740fbfed-7074-49f1-8680-6ddc48bfc2da] succeeded in 1.000898975878954s: 455
[2023-09-01 11:08:46,545: INFO/ForkPoolWorker-4] Task app.controller.index.add_together[d380858f-3e65-4569-bcea-54ea8db5e6cf] succeeded in 1.0016995184123516s: 771
[2023-09-01 11:08:46,546: INFO/ForkPoolWorker-6] Task app.controller.index.add_together[78b6ee5f-15a0-409b-b41f-709b0fdcb818] succeeded in 1.0007124096155167s: 281
[2023-09-01 11:08:47,537: INFO/ForkPoolWorker-8] Task app.controller.index.add_together[7baa35a0-d695-4010-8120-051d5eea9af7] succeeded in 1.00179473310709s: 788
[2023-09-01 11:08:47,538: INFO/ForkPoolWorker-7] Task app.controller.index.add_together[a482a9d2-1ffd-47df-b421-0bfcd1b386e1] succeeded in 1.0018408931791782s: 729
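For reference, the minimal test service is roughly the sketch below. Only the task name add_together and redis db 6 come from the log above and the keys listing later in the thread; the broker URL, the module layout and the /add route are assumptions on my part.
import time
from celery import Celery
from flask import Flask

app = Flask(__name__)

celery = Celery(
    "app",
    broker="redis://127.0.0.1:6379/6",    # assumed: same redis db 6 as in the keys listing later in the thread
    backend="redis://127.0.0.1:6379/6",
)

@celery.task
def add_together(a, b):
    time.sleep(1)    # simulate roughly 1 s of work, matching the timings in the log above
    return a + b

@app.route("/add")
def add():
    # fire-and-forget: .delay() only pushes a message onto the broker and returns at once
    results = [add_together.delay(i, i * 2) for i in range(10)]
    return {"task_ids": [r.id for r in results]}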
So one producer sending tasks now has two consumers (the host service plus the docker service). The task messages are handled fine, they just don't run asynchronously.
The producer sends tasks in the normal way:
put_content_to_obs.delay(new_name, local_name)
The producer doesn't ask for the result either, it only sends.
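Since the caller never reads the return value, one thing worth trying (just a guess on my side, not something I've confirmed fixes anything) is declaring the task with ignore_result=True, so the result backend is taken out of the picture entirely:
# sketch: skip the result backend when nobody ever reads the return value
@celery.task(ignore_result=True)
def put_content_to_obs(new_name, local_name):
    ...  # upload to OBS as before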
These past few days this has had me pretty confused. I've tried it on my own machine and in a local docker container, and both run asynchronously; only on the server does it fail.
Today I tried running an extra docker container on the server just to host celery.
That is, one celery worker on the host and one worker inside docker, so effectively two consumers.
Still synchronous, though. I fired 10 tasks: the host executed 4 and docker executed 6, so the tasks were distributed, but execution stayed serial and the total run time didn't change.
I tried two different docker commands, with the same effect:
Dockerfile1 (default prefork pool, 8 concurrent processes):
...
CMD ["celery", "-A", "docker_celery", "worker", "--loglevel", "INFO", "--logfile=logs/celery_docker.log"]
Dockerfile2 (eventlet pool, concurrency 5):
...
CMD ["celery", "-A", "docker_celery", "worker", "--pool=eventlet", "--concurrency=5", "--loglevel", "INFO", "--logfile=logs/celery_docker.log"]
Same result: synchronous, total run time unchanged. Frustrating...
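One more thing I can still check (not sure it will explain anything): whether the pool and concurrency actually took effect inside the container, and whether more than one task is ever executing at the same time:
docker exec <container> celery -A docker_celery inspect stats    # reports the pool type and max-concurrency of each worker
docker exec <container> celery -A docker_celery inspect active   # tasks currently executing; purely serial processing shows at most one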
I also tried manually setting the concurrency to 3.
The only thing that changed was the worker id: celery -A make_celery worker --concurrency=3 --loglevel INFO
But it's still synchronous:
[2023-08-29 18:19:00,670: INFO/MainProcess] Task pss.api.offce.put_content_to_obs[02ac662b-e0bd-4cc3-b659-6345a471505a] received
[2023-08-29 18:19:00,756: WARNING/ForkPoolWorker-1] requestId:
[2023-08-29 18:19:00,756: WARNING/ForkPoolWorker-1] 0000018A40CDC0F9540ADCD7126FE0E9
[2023-08-29 18:19:00,757: WARNING/ForkPoolWorker-1] [2023-08-29 18:19:00,757] WARNING in offce: obs_upload_file:OK
[2023-08-29 18:19:00,757: WARNING/ForkPoolWorker-1] obs_upload_file:OK
[2023-08-29 18:19:00,757: WARNING/ForkPoolWorker-1] test_8.png
[2023-08-29 18:19:00,757: INFO/ForkPoolWorker-1] Task pss.api.offce.put_content_to_obs[02ac662b-e0bd-4cc3-b659-6345a471505a] succeeded in 0.08660224080085754s: True
[2023-08-29 18:19:02,301: INFO/MainProcess] Task pss.api.offce.put_content_to_obs[19d3c1aa-20be-4dcb-a819-360191532325] received
[2023-08-29 18:19:02,400: WARNING/ForkPoolWorker-1] requestId:
[2023-08-29 18:19:02,400: WARNING/ForkPoolWorker-1] 0000018A40CDC7595A03C83BB2923AA0
[2023-08-29 18:19:02,401: WARNING/ForkPoolWorker-1] [2023-08-29 18:19:02,401] WARNING in offce: obs_upload_file:OK
[2023-08-29 18:19:02,401: WARNING/ForkPoolWorker-1] obs_upload_file:OK
[2023-08-29 18:19:02,401: WARNING/ForkPoolWorker-1] test_9.png
[2023-08-29 18:19:02,402: INFO/ForkPoolWorker-1] Task pss.api.offce.put_content_to_obs[19d3c1aa-20be-4dcb-a819-360191532325] succeeded in 0.09988882020115852s: True
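Reading that log again: each task succeeds in about 0.09 s, but the "received" lines arrive about 1.6 s apart, so the waiting seems to happen before the message even reaches the worker, i.e. on the producer side. A quick check (hypothetical code, file_names is made up) is to time the .delay() calls themselves; they only enqueue a message, so they should return in milliseconds:
import time
for name in file_names:
    t0 = time.perf_counter()
    put_content_to_obs.delay(name, name)    # enqueue only, nothing waits on the result here
    print(f"delay() took {time.perf_counter() - t0:.3f}s for {name}")
# if each delay() takes on the order of a second, the blocking is in building/sending
# the message (or in the surrounding code), not in the worker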
@celerysoft It's an 8-core 32 GB server, so the default concurrency at startup is 8; there's a screenshot of the startup summary above. Tried that already, and setting it manually doesn't help either.
Redis definitely has values in it, so the connection works:
127.0.0.1:6379[6]> keys *
1) "celery-task-meta-a144f43b-93eb-4047-bc01-6d0fdfe9b8f6"
2) "celery-task-meta-865395d9-2226-4969-a269-a93d56ee3c4c"
3) "celery-task-meta-2c44dafc-93e4-4792-8a40-7f747bbd063b"
4) "celery-task-meta-0203b744-504b-414f-adda-41b45fe2aff9"
5) "celery-task-meta-16d37b55-b645-4e05-b58b-55b87fbf4e37"
6) "celery-task-meta-1e2fc20a-a31d-41a3-9003-5c7ffef30e42"
7) "celery-task-meta-a819a02b-7c15-475d-907a-7ab5ed5221cd"
8) "celery-task-meta-c2779805-d922-4423-b2bd-976317e5486d"
9) "celery-task-meta-7a4868f2-305f-4f6b-992c-6ea0791f3427"
10) "celery-task-meta-ff756f38-02c7-4e1f-8b20-39db4722fe83"
11) "celery-task-meta-0e38860b-dd44-47c2-9e40-4a1f4a7c4bb4"
12) "celery-task-meta-3187c555-d3a3-46b1-bf13-3bc38bc79fbd"
13) "celery-task-meta-873c3f38-98b4-47cc-98e8-6f65a58c3269"
14) "_kombu.binding.celery"
15) "_kombu.binding.celery.pidbox"
16) "celery-task-meta-bca09af8-14f4-4d00-84d1-baae7d233070"
17) "celery-task-meta-4f2c9e67-86a8-410f-bbe4-1a408981fd1a"
18) "celery-task-meta-cc93cd0f-f931-4a8c-a24e-795863531953"
19) "celery-task-meta-53d64e39-c872-46d7-a392-57e8617b8751"
20) "celery-task-meta-30efb54a-9f95-46e0-bd49-4190d5684f4c"
21) "celery-task-meta-ca6a5f83-3cab-4111-92c8-f154c2e03766"
22) "celery-task-meta-02a741d2-7426-4339-ad57-a5eea00c72e6"
23) "_kombu.binding.celeryev"
24) "celery-task-meta-94218c29-08b7-4982-ac15-2bc349767fa6"
25) "celery-task-meta-2a9fd0de-2f14-4dbe-a26e-56d6a22c8466"
26) "celery-task-meta-2c9da801-8383-4829-8db0-a0cf5fb8030b"
27) "celery-task-meta-d3d0c01d-359d-45d2-809c-9cbc5072b73d"
28) "celery-task-meta-71610058-15ea-4d5c-b282-226448300228"
29) "celery-task-meta-ee4efe45-43c3-44e6-af0e-df843cb57ea6"
30) "celery-task-meta-6ea9d50a-6b6e-4e28-a8cb-837c6517da54"
@kkk9 How do I go about checking those two, queues and settings?
The celery command on the server is the same as the one I run locally:
celery -A make_celery worker --loglevel INFO --logfile=/logs/celery.log
Both start up fine; locally it runs asynchronously, so everything finishes quickly.
On the server it simply never runs asynchronously. I've switched the worker pool from the default prefork to eventlet and to gevent, and neither helps.
Has anyone else run into this?
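For the record, the commands I plan to run on both machines and diff (may not turn up anything, but they do cover the queues and the settings):
celery -A make_celery inspect active_queues   # which queues each worker actually consumes from
celery -A make_celery inspect conf            # the effective configuration each worker is running with
celery -A make_celery inspect stats           # pool type, max-concurrency, prefetch counts
celery -A make_celery report                  # versions and environment, handy for a local vs server diff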
@shanghai1998 The problem is that in a second-tier city like Xiamen, an outsourcing company can't survive: few orders, lots of customization, and developer costs are high.
I've seen quite a few product-oriented software companies slowly turn into custom-development shops (clients want more and more bespoke work these days).
The employees make money and the business brings in money, but the company itself loses money, and most of them have gone under by now.
@Features It's not about taking orders online; negotiating and closing deals all happen offline, and of course payment comes before the work starts.
The question is mainly about the development side: how to organize the online collaboration better.
@tomczhen About the billed-by-hours model you mentioned, I've thought about building a work-order system.
Break everything in a project down into individual work orders and settle by the hours on each order; rework on an existing order isn't paid, new orders are.
The hard part is that the hours for a work order aren't easy to quantify.
@opengps Changes are unavoidable; whatever even the biggest companies design will have shortcomings and need adjusting.
But some clients are very particular and want extra money for every single change, and that gets awkward.
So the key is finding a reasonable middle ground that keeps everyone comfortable.
I used to recruit developers from a university: undergrads are hardworking, eager to learn, really good to work with.
But once they graduate, I'm left without people again.