[00:02:19] <Godslastering> if i set up logging in my mongodb.conf file, can i also tell it to log to screen? it's telling me that all output is going to my log file, i want output going to both
[00:07:22] <jordanorelli> Godslastering: if it's already going to a file you can just use `tail -f` to watch it.
[00:08:12] <Godslastering> jordanorelli: that's what i'm doing right now, but if i dont specify a log path, it just logs to screen. can i do both? or should i just stick to tailing the logfile?
[00:09:10] <jordanorelli> well it depends how you run it. the other option would be to not specify the log file, and then use the `tee` command to copy it into a file
[00:09:27] <jordanorelli> but i can't think of any benefits to doing such a thing.
[00:09:47] <jordanorelli> i personally just tail the log file.
[00:10:04] <Godslastering> jordanorelli: well on my local testing machine i run mongod and don't fork it to a daemon, and just watch the output in another terminal window
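(A minimal sketch of the tee approach jordanorelli mentions, assuming mongod is run in the foreground with no --logpath set; the paths here are illustrative:

    # run mongod in the foreground (no --fork, no --logpath) and copy its
    # stdout/stderr to the terminal and into a file at the same time
    mongod --dbpath /data/db 2>&1 | tee /var/log/mongod-console.log

With --logpath set in mongodb.conf there is no built-in "log to both" switch, so it comes down to tailing the file or tee-ing the foreground output.)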
[02:59:56] <nofxx> Forgot a unique index, will mongo erase the dups if I ensure it now?
[03:31:13] <nofxx> Answer: It fails nicely. (what happens when you add an index with unique: true on dups)
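(For reference, a mongo shell sketch of what nofxx describes, with illustrative collection and field names; building a unique index over existing duplicates fails, and pre-3.0 servers also accepted a destructive dropDups option:

    // fails with a duplicate key error if the collection already contains dups
    db.users.ensureIndex({ email: 1 }, { unique: true })

    // older (pre-3.0) servers also accepted dropDups, which keeps one document
    // per key and deletes the rest -- destructive, so use with care
    db.users.ensureIndex({ email: 1 }, { unique: true, dropDups: true })
)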
[09:27:20] <idletask> I run mongodb with --noprealloc but it still tries and preallocates, full command line is /usr/bin/mongod --bind_ip 127.0.0.1 --nounixsocket --journal --dbpath /data/mongodb --noauth --noscripting --noprealloc
[09:27:31] <idletask> Why does it still try to prealloc if I tell it not to?
[09:30:09] <idletask> It tries and preallocates the _journal_, not the data file :(
[09:30:26] <idletask> I see no option for not preallocating the journal, does there exist one which is "hidden"?
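(If I remember the 2.x-era flags right, --noprealloc only covers data files; the journal had its own knobs. A sketch based on idletask's command line, adding the two journal-related options from that era -- --smallfiles caps journal file size and --nopreallocj is the little-advertised switch that skips journal preallocation:

    /usr/bin/mongod --bind_ip 127.0.0.1 --nounixsocket --journal \
        --dbpath /data/mongodb --noauth --noscripting --noprealloc \
        --smallfiles --nopreallocj
)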
[09:46:09] <circlicious> i am importing via rockmongo, this is the error i get, what does it mean
[09:46:11] <circlicious> insert too large: 17614416, max: 16000000
[09:46:28] <circlicious> i dont think any of my documents is 16mb big, that's a huge size to reach.
[09:50:07] <circlicious> the import file size is 16.8mb, how can any document in any collection be 16mb then?
[11:50:27] <Gargoyle> circlicious: Is your web server limiting the upload size to 16MB ?
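(Two quick ways to narrow this down, sketched with illustrative database/collection/file names: check the BSON size of a suspect document from the mongo shell, and try the import with mongoimport directly so the web server's upload limit is out of the picture:

    // in the mongo shell: BSON size in bytes of one document (the hard cap is 16MB)
    Object.bsonsize(db.mycollection.findOne())

    # from the command line, bypassing rockmongo/PHP upload limits entirely
    mongoimport --db mydb --collection mycollection --file dump.json
)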
[12:52:37] <Godslastering> are regex queries supposed to be extremely slow, even on an indexed field? i'm getting about 238ms for a simple regex using {field:{$not:REGEX}}
[12:58:44] <Gargoyle> Godslastering: Does your REGEX have a start anchor "^" ?
[13:07:14] <Godslastering> Gargoyle: no but it has an end anchor
[13:07:36] <Gargoyle> Godslastering: Then it won't use an index.
[13:08:12] <Godslastering> Gargoyle: really ...? even if it has an anchor of some sort? that's odd. any way to speed up my regex then?
[13:08:54] <Gargoyle> Godslastering: Not really odd, since the index will be stored from left to right. IIRC.
[13:09:21] <Gargoyle> Not sure how to speed that up, I think you essentially have to do a full scan?
[13:09:31] <Gargoyle> does explain tell you anything useful?
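(A sketch of the check Gargoyle is suggesting, with illustrative collection and field names and assuming an index on the field; a regex with a literal, ^-anchored prefix can walk the index, while a suffix-anchored one falls back to scanning:

    // suffix anchor only: cannot use the index; 2.x explain() reports BasicCursor
    db.hosts.find({ mask: { $not: /\D[^\/]+$/ } }).explain()

    // literal ^-anchored prefix: can use an index on the field (BtreeCursor in 2.x)
    db.hosts.find({ mask: /^freenode\// }).explain()
)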
[13:28:59] <Godslastering> Gargoyle: (sorry for delay, i've been afk back and forth) explain tells me i'm just using a basiccursor and my nreturned is around 250k
[13:30:13] <Gargoyle> What is it you are searching on?
[13:48:07] <Godslastering> Gargoyle: what do you mean what am i searching on?
[13:49:04] <Gargoyle> What's the field holding? There might be something you can do to make the bit you are looking for indexed?
[13:50:13] <Godslastering> Gargoyle: it's holding a hostmask (an irc one here like freenode/staff/example) and i'm doing the regex \D[^/]+$ matching stuff like freenode/staff/example but not web/ip.192.168.0.1
[13:50:30] <Godslastering> Gargoyle: it has an end anchor but not beginning, so i'm not sure how to make this work any faster
[13:51:27] <Gargoyle> Godslastering: Can you split it at the last "/" and store that in another field?
[13:51:56] <Godslastering> Gargoyle: that's kind of what i'm doing with the query. i'm checking all hostmasks which CAN be split, and then handling those separately
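(A sketch of Gargoyle's split-field idea, with made-up names: compute the part after the last "/" when the document is written, so later queries can look at a short, indexed field instead of regexing the whole mask:

    // at write time, also store the segment after the last "/" (omitted when there is none)
    var mask = "freenode/staff/example";
    var i = mask.lastIndexOf("/");
    var doc = { mask: mask };
    if (i !== -1) doc.maskTail = mask.substring(i + 1);
    db.hosts.insert(doc);

    // index the tail; "which hostmasks can be split at all" becomes a simple $exists check
    db.hosts.ensureIndex({ maskTail: 1 });
    db.hosts.find({ maskTail: { $exists: true } });
)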
[14:05:28] <manveru> Godslastering: reverse the string?
[14:05:47] <Godslastering> manveru: when i store it?
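(The trick manveru is hinting at, sketched with illustrative names: store a reversed copy of the field at write time, index it, and a suffix match on the original becomes a prefix match on the copy. It only buys an index walk when the end of the pattern is a literal string -- MongoDB can only use an index for a regex with a literal ^-anchored prefix -- so it would not help \D[^/]+$ directly, but for something like "ends with /example" it would:

    // store the mask both ways when inserting
    var mask = "freenode/staff/example";
    db.hosts.insert({ mask: mask, maskReversed: mask.split("").reverse().join("") });
    db.hosts.ensureIndex({ maskReversed: 1 });

    // "mask ends with /example"  ==>  "maskReversed starts with elpmaxe/"
    db.hosts.find({ maskReversed: /^elpmaxe\// });
)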
[15:04:59] <damonhouk> Can i ask questions here or is there another channel for that?
[15:08:24] <damonhouk> OK.. mongoexport --jsonArray gives me _id with key $oid, whereas mongophp query results give me _id key $id. Can i get mongoexport to give me the same _id format? Does anybody know the rationale behind this? It seems inconvenient.
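(For context on damonhouk's question: mongoexport writes MongoDB Extended JSON, where an ObjectId is spelled {"$oid": ...}, while the PHP driver's MongoId object exposes an $id property when it gets JSON-encoded. As far as I know mongoexport has no flag to change that spelling, so the usual fix is to normalize one side yourself. Roughly, the two shapes look like this (values illustrative):

    // mongoexport --jsonArray output (MongoDB Extended JSON)
    { "_id": { "$oid": "4f8b1d2e5f9c3a0001a1b2c3" }, "name": "example" }

    // a PHP-driver result once the MongoId object is JSON-encoded
    { "_id": { "$id": "4f8b1d2e5f9c3a0001a1b2c3" }, "name": "example" }
)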