If no one opposes this statement:
"If p divides k, then p does not divide k*2^n1. This means that N=k*2^n1 is prime with probability k/phi(k) * 1/ln(N) instead of probability 1/ln(N). Then I would declare 15k*2^n1 a viable search for the future, due to the evidence of first 1001 k listed on the k*2^n1 search. And also I have probed k<5000, n<5000 with precise results 15k produces the most primes! There are exceptions to the rule but so far this is the evidence for little k (odd). Please counter this if you can! :question: :idea: 
I'd like to add that 3 is a great candidate to test.
I had previously doubted testing k=3, but I'm hip. 
Yes, Shane, three is cool 8)
The sieving is up to 'p' equals 3.9 trillion. Unfortunately I had a power cut/outage and lost the rate at which my Athlon 1GHz was eliminating candidates. However, over about 160 hours it eliminated 184 candidates, on average one every 52 minutes. I have set the sieving computers off on another 1.5 trillion. I have had a word with Paul Jobling ( who wrote NewPGen, the sieve I am using ) and confirmed with him that I can increase sieving throughput on my slow Athlon and two relatively quick AthlonXPs by about twenty per cent. This is achieved by putting the highest range of 'p' on the slowest computer and the two lower ranges of 'p' on the quicker computers. As soon as the two XPs have finished I can stop the slow computer and merge the files. All blocks up to 'n' equals 420,000 have been assigned. Please email me ( paulunderwood@mindless.com ) to join in the search. I feel a prime find is imminent... ;) 
8) I would still like to discuss this cutoff time. :?
At the point where you are no longer pulling composites at a rate equal to the testing time of a given range, stop NewPGen, start LLR or PRP, test that range, delete that range from the NewPGen file, and resieve the file.

s = number of sieved n so far
t = rate of sieving n, in seconds
r = number of sieved n in a given range so far (min-max)

(s*t)/r = cutoff for NewPGen

The appropriate ranges are however the real question :question: These probably aren't the best cutoff ranges, but what is?

3*2^400011-1 Time : 2364.714 sec. (start of range r)
3*2^500006-1 Time : 3861.034 sec. -- when (s*t)/r = 3861 sec., cut
3*2^600023-1 Time : 5595.876 sec.
3*2^700003-1 Time : 11707.696 sec.
3*2^800023-1 Time : 13288.771 sec.
3*2^900000-1 Time : 18232.217 sec.
3*2^999988-1 Time : 23783.626 sec.

(min)*1.618033988749...^r ? :devil: :evil: :devil: :evil: :devil: :shock: 
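The (s*t)/r rule above can be sketched with made-up numbers (the figures below are my own assumptions, not from the thread): if the whole file holds s candidates and eliminates one every t seconds, eliminations land in a sub-range of r candidates only r/s of the time, so that range sees one elimination every (s*t)/r seconds. Once that exceeds the LLR time for the range, the range is ripe for cutting out and testing.

```python
# Sketch of the cutoff rule (s*t)/r with hypothetical figures.
def range_cutoff_seconds(s, t, r):
    """Seconds between eliminations that land inside a sub-range of r
    candidates, given s candidates in the file and one elimination
    every t seconds overall."""
    return s * t / r

# hypothetical: 37,000 n in the sieve, one elimination every 3,120 s,
# 5,000 of the n lying in the lowest range of the file
cutoff = range_cutoff_seconds(37_000, 3_120, 5_000)
print(cutoff)  # 23088.0 seconds per elimination within that range
```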
I am inclined to agree more with Sander ( smh ) than with you on the method of sieving. It seems better to keep candidates in the sieve for much longer than first appears, so much so that it may be better to organise a bigger sieving effort than the current one :devil: .
( Firstly let me make the assumption that candidates are uniformly random over 'n'. ) Suppose we decrease the range to a quarter of the original; the time taken decreases only to a half ( i.e. the square root, as explained in the NewPGen help on fixed-k searches ). As a second example, if we decrease the original range to a sixteenth, it will take only a quarter of the time. In my last report about the sieving status of 'n' from 400,000 to 1,000,000 I said I had removed 184 candidates in 160 hours, which is about one every 52:17 minutes. Had I only sieved 420,000 to 1,000,000 I would have decreased the time taken to sqrt(58/60)*160, which is about 157.5 hours, but would have eliminated only (58/60)*184 candidates, i.e. about 178. In the extra time gained ( 160 - 157.5 = 2.5 hours ) we could have tested an extra 3.8 candidates with LLR ( based on earlier given timings ). Summarising: by leaving the candidates in the sieve we can eliminate 184 candidates, but by doing the lower 2/60 by LLR and leaving the rest in the sieve we would eliminate 181.8 candidates; 2.2 fewer :shock: On the bright side, I see sieving taking me another 4-6 weeks on my three Athlons. If anyone with experience of using NewPGen has an AthlonXP ( or better ) or more and would like to help me do some sieving, please contact me ( paulunderwood@mindless.com ). I am sieving 'p' in blocks that take an AthlonXP one week to complete, Saturday to Saturday. 
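The trade-off above can be checked in a few lines. A minimal sketch, assuming the square-root scaling of fixed-k sieve time stated in the post and an LLR time of roughly 2,365 seconds per candidate near n = 400,000 (my assumption, consistent with the timings posted earlier):

```python
# Verifying the sieve-vs-LLR arithmetic for narrowing 400,000-1,000,000
# (width 60 in units of 10,000) down to 420,000-1,000,000 (width 58).
from math import sqrt

hours_full = 160.0      # reported sieving time for the full range
removed_full = 184      # candidates eliminated in that time
llr_secs = 2365.0       # assumed LLR time per candidate near n = 400,000

hours_narrow = sqrt(58 / 60) * hours_full            # ~157.3 hours
removed_narrow = (58 / 60) * removed_full            # ~177.9 candidates
extra_llr = (hours_full - hours_narrow) * 3600 / llr_secs  # ~4 LLR tests

print(removed_full, removed_narrow + extra_llr)  # sieving alone wins
```

With these figures the narrowed sieve plus LLR eliminates about 182 candidates against 184 for leaving everything in the sieve, matching the post's conclusion.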
I assume random placement as well.
But now I strongly disagree, due to many trials over the past few days. There is a dual benefit: testing the first range when it is ripe, rather than rotten, frees up cycles for the main file you cut it from, which is now smaller and more edible as well. Any given range of n (min-max) should not differ by more than twice the minimum time of that range. The main large file does not lose its benefits; it becomes shorter as time allows after it has done its job in a range, so that does not differ from Sander's idea. The large file is the way to start. I can't see how you justify wasting much more than 4728 seconds to pull a composite out of this first range. Why allow the sieve to spend up to 23,783 seconds to pull one out? Are you suggesting sieving until 23,783 seconds per n for the entire file at once? :shock: :oops:

3*2^400011-1 Time : 2364.714 sec. (min of range r) :question:
~3*2^550001-1 Time : ~4728 sec. (2 * min)
" ~9456 sec. (4 * min)
" ~18912 sec. (8 * min)
3*2^999988-1 Time : 23783.626 sec. 
I am sorry for not explaining when I will stop sieving. I plot the timings I gave earlier over 'n'. Then I estimate the average time by finding where half the area under the curve falls. Then I add a little for good measure! Therefore my cutoff time will be somewhere in the region of 3:30-4:30 hours.
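This half-the-area estimate can be reproduced from the timings posted earlier in the thread (minus signs restored; the trapezoidal interpolation is my own sketch of the method described):

```python
# Estimate the n where half the area under the LLR-timing curve falls,
# and the testing time at that point, using the timings posted earlier.
ns   = [400_011, 500_006, 600_023, 700_003, 800_023, 900_000, 999_988]
secs = [2364.714, 3861.034, 5595.876, 11707.696,
        13288.771, 18232.217, 23783.626]

# trapezoidal area of each interval between consecutive timings
areas = [(ns[i+1] - ns[i]) * (secs[i] + secs[i+1]) / 2
         for i in range(len(ns) - 1)]
half = sum(areas) / 2

acc = 0.0
for i, a in enumerate(areas):
    if acc + a >= half:
        frac = (half - acc) / a          # linear interpolation inside interval
        n_mid = ns[i] + frac * (ns[i+1] - ns[i])
        t_mid = secs[i] + frac * (secs[i+1] - secs[i])
        break
    acc += a

print(round(t_mid / 3600, 2))  # hours -- lands inside the 3:30-4:30 window
```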
3*2^800023-1 Time : 13288.771 sec. 
Check the rate of those n being pulled between 400,000-550,000.
What if it was [i]consistently[/i] pulling them no faster than one every 4728 sec from here? 
Suppose we are eliminating a candidate from the range 400,000-550,000 every 4728 seconds.
Method 1. We leave all the numbers in the sieve, which means we remove one candidate from the whole range every 4728/4 seconds = 1182 seconds.

Method 2. We remove the range 400,000-550,000 from the sieve and continue sieving the rest. The time taken to sieve 0.75 of a candidate is sqrt(3/4)*1182 seconds = 1023 seconds. Assuming the average LLR time on the range 400,000-550,000 is 3800 seconds, with the extra 158 seconds saved in sieving we can test 158/3800 of a candidate, i.e. 0.04 of a candidate. Overall, in 1182 seconds we would have eliminated only 0.79 of a candidate. 
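The two methods above can be compared directly. A minimal sketch, assuming (as the post does) that eliminations fall uniformly over n and that sieve time grows with the square root of the candidate count:

```python
# Method 1 vs Method 2: is it worth cutting 400,000-550,000 out of the sieve?
from math import sqrt

t_range = 4728.0   # sec per elimination from 400,000-550,000 alone
llr_avg = 3800.0   # assumed average LLR time on that range

# Method 1: the range is 1/4 of the file, so the whole sieve removes
# one candidate (from anywhere) every t_range / 4 seconds.
t_whole = t_range / 4                      # 1182 s per elimination

# Method 2: drop the range; the remaining 3/4 of the file removes 0.75
# of a candidate in sqrt(3/4) of the time, and the savings go into LLR.
t_rest = sqrt(3 / 4) * t_whole             # ~1023 s
llr_share = (t_whole - t_rest) / llr_avg   # ~0.04 of a candidate tested

print(0.75 + llr_share)  # ~0.79 of a candidate, versus 1.00 for Method 1
```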
Just a quick line to let you know how the sieving is progressing. All 'n' greater than 430,000 are being sieved. There are 37,318 'n' in the sieve. The greatest divisor tested is 5.4 trillion. In about 170 hours my 1 GHz Athlon eliminated 130 candidates, about 78:28 minutes per elimination. The target for elimination on this computer is 42 candidates a week.

All candidate 'n' less than 445,000 have been assigned to our members for testing. There are 36,344 candidates left in the sieve, which has reached a maximum divisor of 6.9 trillion. My 1 GHz Athlon eliminated 69 candidates in about 150 hours, i.e. one every 2:10 hours. Remember the target is about one in 4:00 hours. My three Athlons are now sieving another 1.5 trillion range of divisors.
