#mongodb logs for Saturday the 9th of June, 2012

[00:24:56] <dstorrs> in the Perl driver, is 'ensure_index' a no-op if the index already exists? how would I find out?
[01:58:35] <dstorrs> from the Perl driver, what's the correct syntax for the 'scope' param on a mapreduce?
[01:58:50] <dstorrs> should the value be a string, a hashref, or what?
[02:19:34] <Max-P> Hi, just a quick advice request: I have to store daily data for a user, which one is better: having a collection for each actions independant or to store all of that in a big "day" collection?
[02:44:23] <sirpengi> Max-P: probably better for one collection
[02:44:49] <sirpengi> what do you mean by "each actions independant"?
[05:50:21] <Max-P> sirpengi: I went with one collection, sorry I went back to work ;) "each actions independant" meant that a user can add multiple activities to their calendar, but they can edit them individually, move them from one day to another, reorder them. I still considered it as a whole day, as managing them separately would have been too much trouble. Thanks anyway, it confirms that I went the right way ^^
[05:58:03] <jgornick> hey guys, would it be possible to find records where the date lands on a tuesday?
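
One hedged way to do what jgornick asks: a $where query can match on the weekday of a date field (it evaluates JavaScript per document, so it cannot use an index; the collection and field names are illustrative).
    db.records.find({ $where: "this.date.getDay() === 2" })   // Date.getDay(): 0 = Sunday, 2 = Tuesday
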
[06:01:17] <henrykim> I would like to ask about a situation I'm facing. I am testing write operations for blog articles; we have about 1 billion blog articles now.
[06:01:59] <henrykim> I started inserting (writing) articles one by one. The shard key is the url, which is about 70 bytes long.
[06:02:21] <henrykim> I am operating 3 shards and 5 routers.
[06:03:00] <henrykim> at the beginning, performance is over 3000 TPS, but 1 or 2 hours later it drops to about half of that.
[06:03:31] <henrykim> and eventually the average write throughput is about 400 ~ 500 TPS.
[06:04:03] <henrykim> I think the performance drop is caused by random writes.
[06:04:38] <henrykim> I cannot assume any order for the writes. They happen randomly.
[06:05:52] <henrykim> Internally, it's because of paging: to write an article with url 'a', mongo needs to load 1 chunk; to write an article with url 'b', mongo needs to load 10 chunks, and so on.
[06:07:03] <henrykim> so my question is: how can I solve this situation? Any ideas?
[06:23:14] <Max-P> henrykim: I'm new here and don't know much about Mongo, but you might need to change the way your data is linked so it doesn't need to do that much read/write...
[06:23:53] <henrykim> Max-P: sorry, what do you mean?
[06:24:48] <Max-P> henrykim: sorry it's 2:30 AM here, I will try again: why do you need to load that many documents when inserting your article? (if I understood your problem)
[06:27:39] <henrykim> Max-P: different time zone ;) it's a sunny day where I live. anyways, I am testing mongodb for use in our business.
[06:27:59] <henrykim> the blog collection we have is quite busy and large.
[06:28:33] <henrykim> every day I get 3,000,000 upserts and 7,000,000 selects from users.
[06:29:00] <henrykim> anyway, I need to keep all the data, with the url as the unique key.
[06:30:16] <henrykim> the problem I already described is that write operations happen randomly. random, random.
[06:32:13] <Max-P> If you properly indexed all of this, then I'm sorry I can't help. I thought you had some linking between your blog thing that slowed down the process, so I thought you might have had to change the way you index and traverse the collection, but I'm no use there, sorry
[08:12:55] <stefancrs> morning
[08:13:55] <stefancrs> I basically want to store "heartbeats" from different sources in a collection. My first naive idea was to just store one document per source name and update its timestamp, but that wouldn't give me any logging
[08:14:30] <stefancrs> so my next idea was to just insert a new document for each heartbeat received (name + timestamp per document, that's all I need to be able to retrieve)
[08:15:23] <stefancrs> at the same time, I want to keep it "name" agnostic. so how would I with a simple query retrieve the latest heartbeats from every source?
[08:16:17] <stefancrs> (if there are sources "s1", "s2" and "s3" for example. they send heartbeats whenever they "feel" like it, but I want to get all the latest ones...)
[08:16:27] <stefancrs> ideas? :)
[08:17:50] <dstorrs> stefancrs: some, but I don't quite follow
[08:18:04] <dstorrs> what would a real source name be?
[08:18:14] <stefancrs> dstorrs: s1, s2 and s3 are computers in our system
[08:18:24] <stefancrs> dstorrs: the names could be for instance "file server 1"
[08:18:33] <stefancrs> or whatever
[08:18:48] <dstorrs> aah, ok. so the point is to get a historical log of "yes, I'm alive"
[08:18:51] <stefancrs> dstorrs: and they'll all tell the main backend every now and then that "hey, I'm alive"
[08:18:56] <stefancrs> exactly
[08:19:09] <stefancrs> and I want to be able to retrieve all sources' _latest_ timestamps
[08:19:23] <stefancrs> maybe I should just use two collections?
[08:19:30] <stefancrs> one with the log
[08:19:34] <stefancrs> and one with the latest
[08:20:29] <dstorrs> well, first off, this might be useful : http://api.mongodb.org/wiki/current/Optimizing%20Object%20IDs.html#OptimizingObjectIDs-Extractinsertiontimesfromidratherthanhavingaseparatetimestampfield.
[08:21:18] <dstorrs> second, you could use a capped collection. that guarantees that insertion order is preserved, so you can trivially retrieve most recent using findOne()
[08:22:16] <dstorrs> so, put the two together and you've got a capped collection of docs like this: { _id : ObjectID(...), name : 's1' }
[08:22:51] <dstorrs> it also keeps you from having to cycle out old records
[08:23:06] <dstorrs> the only hazard is you need to preallocate the space, and it can't be sharded
[08:23:53] <dstorrs> stefancrs: make sense?
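
A minimal mongo-shell sketch of the capped-collection approach dstorrs describes; the collection name, preallocated size, and source name are illustrative.
    db.createCollection("heartbeats", { capped: true, size: 10485760 })  // preallocate ~10 MB up front
    db.heartbeats.insert({ name: "s1" })                                 // the ObjectId in _id already encodes the insert time
    db.heartbeats.find({ name: "s1" }).sort({ $natural: -1 }).limit(1)   // latest heartbeat from s1 (insertion order is preserved)
    db.heartbeats.findOne()._id.getTimestamp()                           // recover the timestamp from the ObjectId
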
[08:25:41] <stefancrs> I honestly prefer to use a timestamp field when I have the choice
[08:26:05] <stefancrs> but that's not really part of the issue at hand :)
[08:26:40] <kollapse> Hi. Is there any way I can get the type of an object ?
[08:27:00] <stefancrs> kollapse: type of what kind of object? like if it's a string, int, array or object?
[08:27:03] <kollapse> For example something like db.customers.find( { name : 'some_name' } ).type() ??
[08:27:05] <kollapse> Yes
[08:27:21] <dstorrs> kollapse: 'typeof' operator in shell
[08:27:41] <dstorrs> or in a function that you eval / map-reduce, etc
[08:27:59] <kollapse> Hmm, any way to do this with the PHP driver ?
[08:29:22] <dstorrs> use the appropriate driver command to say "run map-reduce / run function / etc"
[08:30:16] <stefancrs> kollapse: uhm, you want to check the type of the data after you've retrieved it?
[08:30:31] <kollapse> Basically I have a collection of objects that have a field called 'field1'. 'field1' can be of different types. I want to retrieve the most predominant type.
[08:30:43] <stefancrs> is it REALLY called field1?
[08:31:08] <kollapse> For example if 10 objects have `field1` as strings and only 5 as integer, I want to know that string is the most predominant type.
[08:31:25] <stefancrs> that'd have to be done in a map/reduce I guess
[08:31:46] <stefancrs> or by fetching the entire dataset and doing the evaluation in php
[08:32:02] <stefancrs> if it's never a lot of data, and you want to be able to retrieve it whenever, I'd go for the latter
[08:32:26] <kollapse> It's possible there will be a LOT of data, so a native method is preferred.
[08:32:31] <kollapse> Mapreduce is my only choice, it seems?
[08:34:09] <stefancrs> I'd think so yes, especially if you want to be able to deal with a lot of data
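
A rough sketch of the map-reduce kollapse would need, counting documents per type of field1; the collection name is an assumption, and JavaScript's typeof reports "number" for both ints and doubles.
    db.items.mapReduce(
        function () { emit(typeof this.field1, 1); },           // map: key each document by the JS type of field1
        function (key, values) { return Array.sum(values); },   // reduce: count documents per type
        { out: { inline: 1 } }
    )
    // the key with the largest count is the predominant type
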
[08:34:55] <stefancrs> dstorrs: I think going with two collections makes the most sense for my problem :)
[08:35:52] <dstorrs> stefancrs: ok
[08:36:19] <stefancrs> dstorrs: the "common" scenario is that I want to retrieve the "latest timestamps document"
[08:36:33] <stefancrs> dstorrs: the logging is more because it's the right thing to do
[08:36:40] <stefancrs> dstorrs: the logs won't be displayed in any UI
[08:36:48] <stefancrs> dstorrs: but the latest heartbeats will be
[08:38:48] <dstorrs> stefancrs: if you're going to track the history, don't forget to purge it
[08:38:58] <dstorrs> that's why I was suggesting the capped collection
[08:39:07] <stefancrs> dstorrs: this is for a server that literally will be used for about five days
[08:39:53] <dstorrs> two thoughts, fwiw. first, those sorts of estimates rarely end up working out
[08:40:13] <dstorrs> (i.e., the server ends up living on longer)
[08:40:26] <dstorrs> second, if it will be so short, why do this at all?
[08:41:45] <stefancrs> dstorrs: because the client is iffy?
[08:41:53] <stefancrs> dstorrs: this is for an event at times square
[08:42:15] <dstorrs> no, I meant, why do the logging?
[08:42:40] <stefancrs> dstorrs: good to be able to see if there were any issues while we were asleep, for instance
[08:43:18] <dstorrs> makes sense, I guess.
[08:43:28] <stefancrs> some sense at least
[08:43:37] <stefancrs> and it won't take up a lot of my time to implement
[08:43:41] <stefancrs> so I might just as well
[08:44:13] <stefancrs> the client will be calmer when I tell them "and we log the up time of all the nodes in the system, as long as the main node is alive. if it isn't, we'll know that too."
[08:45:13] <dstorrs> makes sense
[08:45:30] <stefancrs> marketing is an... interesting field.
[08:45:39] <dstorrs> heh
[08:45:54] <stefancrs> it pays good money though, plus I'll get to go to NYC and work there
[08:46:00] <stefancrs> win for me I'd say
[08:46:46] <dstorrs> indeed :>
[08:46:56] <dstorrs> while you're there, visit Max Brenners
[08:47:19] <dstorrs> southeast corner of Union Square, opposite the Regal Cinema
[08:47:34] <dstorrs> it's like Willy Wonka opened a restaurant serving nothing but chocolate.
[08:48:12] <stefancrs> It'll have to be on my "cheat day" then
[08:48:16] <dstorrs> hot chocolate made by melting chocolate bricks and adding cream, chocolate pizza, chocolate fondue, etc
[08:48:29] <stefancrs> I, for realz, don't eat such things six days out of seven :)
[08:48:40] <stefancrs> chocolate pizza? wtf
[08:48:40] <dstorrs> yeah, that's the day to cheat like there's no tomorrow
[08:48:50] <stefancrs> well, I do, on every cheat day
[08:48:52] <stefancrs> so....
[08:49:02] <stefancrs> keep losing weight and gaining muscle though!
[08:49:08] <dstorrs> regular pizza dough with chocolate, then marshmallow or etc on top. sounds weird. utterly amazing
[08:49:24] <stefancrs> sounds... a lot.
[09:48:27] <rudiX> how do I query mongodb and display documents where {field: ...} is set?
[09:49:13] <dstorrs> rudiX: db.coll.find({ field : { $exists : true }})
[09:49:43] <rudiX> thank you very much dstorrs
[09:49:48] <dstorrs> np
[09:55:33] <Bilge> xDDDDDDDD
[10:59:48] <ub|k> i am getting an assertion "assertion 13026 geo values have to be numbers"
[11:00:17] <ub|k> does this mean i have objects in my db that have non-numerical geo values?
[11:28:09] <ub|k> and in that case, is there a way i can find the culprit?
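
One hedged way to hunt for the offending documents, assuming the indexed geo field is called loc and is stored as an [lng, lat] array; $where scans every document, so this is slow on large collections.
    db.places.find({ $where: "this.loc && (typeof this.loc[0] !== 'number' || typeof this.loc[1] !== 'number')" })
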
[11:59:16] <ravana> hi
[12:00:18] <ravana> my mongo shell shows database size is 0.2 GB
[12:00:42] <ravana> but my actual data size is around 4 mb
[12:00:59] <ravana> can anyone help me to understand?
[12:01:57] <ravana> i am using ubuntu 64
[12:02:36] <ravana> mongo version is 2.0.2
[12:02:43] <ravana> any help?
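
The gap ravana describes is usually just file preallocation: mongod allocates data files ahead of need, so the size on disk is much larger than the data itself. db.stats() shows the difference.
    db.stats()
    // compare "dataSize" (bytes of actual data) with "storageSize" and "fileSize"
    // (bytes allocated on disk); a nearly empty database still preallocates files
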
[12:32:43] <natim> Hello guys
[12:32:55] <natim> I started my first mongodb today
[12:33:04] <natim> And I have a question for you : http://stackoverflow.com/questions/10960926/distinct-group-by-using-mongodb-with-pymongo
[12:33:41] <natim> I would like to do a similar request on mongodb to this mysql query SELECT DISTINCT pin, value FROM mesh_captors WHERE arduino = 203 GROUP BY pin ORDER BY date DESC
[12:34:42] <natim> For now I have this : db.mesh_captors.group(key=['pin', 'value'], condition={'arduino': int(arduino_id)})
[12:36:26] <natim> But I now want to keep only the last value order by {date: -1}
[12:36:42] <natim> I guess I need to use the reduce function to do so
[12:40:18] <natim> What about something like this : https://www.friendpaste.com/2r2D79DYoaV3CuSOUmGdLr
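
A hedged mongo-shell sketch of "latest value per pin for one arduino" using group(); the field names follow the conversation, the rest is illustrative.
    db.mesh_captors.group({
        key: { pin: 1 },
        cond: { arduino: 203 },
        initial: { date: null, value: null },
        reduce: function (doc, out) {
            // keep the value from the most recent document seen for this pin
            if (out.date === null || doc.date > out.date) {
                out.date = doc.date;
                out.value = doc.value;
            }
        }
    })
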
[12:51:36] <omid8bimo> hey i have a question. i have a replicaset, im trying to dump my db in secondary server but i get this error: assertion: 13106 nextSafe(): { $err: "not master and slaveok=false", code: 13435 }
[12:51:41] <omid8bimo> can someone guide me?
[13:01:55] <trax> hi
[13:07:02] <trax> I inserted "corrupted" bson into my mongodb database and now I am getting this http://pastebin.com/k6HJiAkK
[13:07:20] <trax> Is it normal that the mongodb server accepts such bson?
[13:16:30] <omid8bimo> hey i have a question. i have a replicaset, im trying to dump my db in secondary server but i get this error: assertion: 13106 nextSafe(): { $err: "not master and slaveok=false", code: 13435 }
[13:42:07] <natim> trax, Actually it looks like a bug
[13:42:20] <natim> Which driver did you use?
[13:46:25] <trax> natim: the one I am developing (in vala)
[13:47:08] <trax> I know there is a bug inside my driver but it shouldn't impact the db that much
[13:50:47] <omid8bimo> anyone?
[13:52:30] <natim> trax, actually I think your driver must take care of this validation
[13:53:11] <natim> MongoDB assumes it is getting and storing valid BSON values
[13:53:37] <natim> omid8bimo, can you explain a bit more maybe ?
[13:54:35] <omid8bimo> natim: im trying to dump a database off my secondary server in a replicaset and i get this error
[13:55:16] <natim> I just started using MongoDB today, I don't know how replica sets work :s
[13:56:34] <omid8bimo> i think its a bug.
[13:56:47] <omid8bimo> just upgraded from 2.0.0 to 2.0.5 and it worked!!
[13:57:51] <trax> natim: ok, so if someone is on the same network as my db, they can just crash it \o/ . Yes, there is authentication (on a non-secure stream):
[14:23:33] <Goopyo> Q: If you have hundreds of users making trades, each trade has a P/L, and you want to query by three things: user, the item traded, and the period of the trade, how would you structure the data best?
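
One hedged way to model Goopyo's data: one document per trade, with a compound index covering the three query dimensions; all names and values below are illustrative.
    db.trades.insert({ user: "alice", item: "AAPL", period: ISODate("2012-06-01"), pl: 125.5 })
    db.trades.ensureIndex({ user: 1, item: 1, period: 1 })
    db.trades.find({ user: "alice", item: "AAPL",
                     period: { $gte: ISODate("2012-06-01"), $lt: ISODate("2012-07-01") } })
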
[14:59:46] <ub|k> shouldn't the order of the elements in a bson document be irrelevant?
[15:00:14] <ub|k> i found a query expression which behaves differently depending on the order of the elements inside a JS object
[15:01:29] <multiHYP> hi
[15:18:04] <pranavk> installing mongodb and it's throwing so many selinux warnings, man, what is this?
[15:18:21] <pranavk> i've never encountered software throwing up so many selinux warnings
[15:19:51] <dstorrs> pranavk: do you have a question we can help with, or do you just need to vent? if the former, I can try to help.
[15:20:55] <pranavk> dstorrs: yes, i want to know if this is a usual problem with mongodb or if it's just my system that's weird?
[15:23:58] <dstorrs> pranavk: I haven't used selinux myself, so I don't know. a quick Google turns up only a few hits for this search: +mongo +selinux
[15:24:28] <dstorrs> +mongodb, rather.
[15:24:38] <dstorrs> that seems to suggest that it's unusual
[15:24:53] <dstorrs> you might do a general package upgrade before installing mongodb
[17:26:04] <Electron> hello
[17:26:18] <Electron> i am new to mongodb
[17:26:32] <Electron> I just downloaded it
[17:26:39] <Electron> and installed it on my mac
[17:26:54] <Electron> I was wondering why its not working
[17:27:31] <Electron> at Jun 9 13:24:48 Error: couldn't connect to server 127.0.0.1 shell/mongo.js:84
[17:27:38] <Electron> this is the error im getting
[17:27:44] <dstorrs> did you run it?
[17:27:57] <Electron> im trying to run it
[17:28:11] <Electron> first i typed mongod
[17:28:22] <Electron> then i got this output
[17:28:31] <dstorrs> and then in another window, you type 'mongo'
[17:28:45] <Electron> mongod --help for help and startup options
[17:28:45] <Electron> Sat Jun 9 13:26:07 [initandlisten] MongoDB starting : pid=1015 port=27017 dbpath=/data/db/ 64-bit host=Zakerias-MacBook-Pro-2.local
[17:28:45] <Electron> Sat Jun 9 13:26:07 [initandlisten] db version v2.0.6, pdfile version 4.5
[17:28:45] <Electron> Sat Jun 9 13:26:07 [initandlisten] git version: nogitversion
[17:28:45] <Electron> Sat Jun 9 13:26:07 [initandlisten] build info: Darwin gamma.local 11.3.0 Darwin Kernel Version 11.3.0: Thu Jan 12 18:48:32 PST 2012; root:xnu-1699.24.23~1/RELEASE_I386 i386 BOOST_LIB_VERSION=1_49
[17:28:45] <Electron> Sat Jun 9 13:26:07 [initandlisten] options: {}
[17:28:45] <Electron> Sat Jun 9 13:26:07 [initandlisten] exception in initAndListen: 10296 dbpath (/data/db/) does not exist, terminating
[17:28:46] <Electron> Sat Jun 9 13:26:07 dbexit:
[17:28:46] <Electron> Sat Jun 9 13:26:07 [initandlisten] shutdown: going to close listening sockets...
[17:28:47] <Electron> Sat Jun 9 13:26:07 [initandlisten] shutdown: going to flush diaglog...
[17:28:47] <Electron> Sat Jun 9 13:26:07 [initandlisten] shutdown: going to close sockets...
[17:28:48] <Electron> Sat Jun 9 13:26:07 [initandlisten] shutdown: waiting for fs preallocator...
[17:28:51] <dstorrs> don't paste
[17:29:07] <dstorrs> use the pastie link in the header
[17:29:49] <dstorrs> do you see the obvious exception in the output?
[17:30:29] <Electron> http://pastie.org/4057302
[17:30:56] <Electron> what was the exception
[17:30:59] <dstorrs> Electron: look at the output you pasted. What is the big glaring problem right in the middle of it?
[17:31:00] <Electron> in the output
[17:31:09] <dstorrs> the one that says "exception" ?
[17:31:43] <Electron> exception: connect failed
[17:31:57] <Electron> so why is failing
[17:32:11] <Electron> i used MacPort to install it
[17:32:16] <Electron> and also homebrew
[17:32:25] <dstorrs> look at the spew you dumped in channel. smack in the middle of it is your problem.
[17:32:58] <Chillance> 10296 dbpath (/data/db/) does not exist, terminating
[17:33:02] <Chillance> create that
[17:33:21] <Electron> ok
[17:33:46] <Electron> so i just need to do: mkdir -p /data/db
[17:34:03] <Chillance> yea
[17:34:19] <Chillance> or
[17:34:26] <Chillance> you can use --dbpath
[17:34:36] <Chillance> to change the location
[17:34:37] <LesTR> change option will be better :P
[17:35:05] <Electron> whats the command i need to type
[17:35:22] <multi_io> can I execute a sequence of JS statements (reads and writes) with mongodb's normal rw lock enabled?
[17:35:51] <multi_io> (so all other writes are blocked during that time)
[17:35:58] <dstorrs> Electron: just do the simple thing. mkdir -p /data/db
[17:36:18] <Electron> ok done
[17:36:35] <Electron> so i guess i need to type: mongod
[17:36:38] <Electron> then mongo
[17:36:41] <dstorrs> Worrying about changing config options for hygiene and platform-appropriateness is relevant for a production system, not for testing
[17:36:43] <Chillance> hopefully there will not be a permission issue :)
[17:37:29] <Chillance> btw, does there really ALWAYS have to be a _id?
[17:37:45] <dstorrs> multi_io: I don't understand the question. you basically want to have only one process that's able to execute writes?
[17:37:48] <Chillance> what if that is not enough for uniqueness?
[17:38:19] <dstorrs> Chillance: there is always an _id, but it can be things other than an ObjectId. e.g., an object
[17:38:50] <dstorrs> plus, you can set unique indices that aren't related to _id
[17:39:38] <Chillance> dstorrs, hmm, well, the thing is that the key I'll be using is larger than that (I think), so if _id is always there, it may not be unique eventually
[17:40:07] <dstorrs> Chillance: can you spell out the actual situation?
[17:40:55] <Chillance> dstorrs, say I want to use a 512 bit hash as the key, isn't that larger than the _id?
[17:41:25] <dstorrs> ok...so just have that be the value of your _id column
[17:41:30] <dstorrs> what's the problem?
[17:41:40] <dstorrs> (not being snarky, I don't understand)
[17:41:50] <Chillance> oh
[17:41:58] <Chillance> so, I set the _id to that?
[17:42:04] <dstorrs> yep
[17:42:26] <Chillance> ahh, ok, interesting.. I had a separate name for it
[17:42:43] <Chillance> so, is that how its done?
[17:42:58] <dstorrs> yep
[17:43:17] <Chillance> I liked a different name, to describe what it is, but I guess this can work too
[17:44:02] <Chillance> what will happen if I don't use _id, but my name and create an index for it... _id will be kinda redundant?
[17:44:12] <dstorrs> yep
[17:44:54] <Chillance> redundant, but always there? also, what happens then if there are two with the same _id? doesn't matter I suppose
[17:45:21] <Chillance> ok, but I think I get it now. use _id :)
[17:46:10] <multi_io> dstorrs: well... what I really want is versioned documents. I'll think about it some more.
[17:46:16] <Chillance> I'm also interested in more in-depth info on how the indexing works for strings... use the first characters for the Btree?
[17:46:22] <dstorrs> Chillance: yes, I just checked the docs, and an ObjectID is only 12 bytes
[17:47:04] <dstorrs> multi_io: if you spell out your situation, I'll try to suggest something
[17:47:13] <Chillance> dstorrs, yea, that may not be enough in the end, but thanks for the info! I guess I will set the hash to the _id instead
[17:47:24] <dstorrs> Chillance: so, a 512-bit hash is larger than an ObjectID
[17:48:49] <Chillance> dstorrs, yea. its an interesting approach. :)
[17:48:50] <dstorrs> Chillance: sounds like a better plan.
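
A minimal sketch of the two options discussed: put your own value (e.g. a hex-encoded 512-bit hash) straight into _id, or keep the generated _id and add a unique index on a separate field. Names are illustrative.
    db.docs.insert({ _id: "<hex-encoded 512-bit hash>", payload: "..." })  // _id only has to be unique, not an ObjectId
    db.docs.ensureIndex({ hash: 1 }, { unique: true })                     // alternative: separate field with a unique index
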
[17:49:16] <dstorrs> as to the string indexing...I'm afraid I don't know
[17:50:03] <Electron> ping dstorrs:
[17:50:17] <dstorrs> pong
[17:50:26] <Chillance> dstorrs, I thought I could use some other name as the unique key and not use _id, but now I get that _id is the one to use as the unique key, and I just set its value. cool!
[17:50:28] <Electron> ok now im getting an error about "no journal files present"
[17:51:12] <Electron> pastie.org/4057401
[17:51:18] <dstorrs> Chillance: you /can/ have other keys be unique. they will enforce uniqueness on those constraints...but you can't get away from the _id
[17:51:31] <Chillance> dstorrs, sure
[17:51:51] <Electron> dstorrs: take look at the link
[17:52:02] <Electron> what is this journal file
[17:52:09] <Electron> and how do i create one
[17:52:14] <at133> Hi, I have a document with two counters. I would like to increment them independently. when I use {$inc {x:1, y:0}} and {$inc {x:0,y:1}} it creates new documents. Is there a way to increment them independently in the same document?
[17:54:37] <dstorrs> Electron: when you write to Mongo, it stages the writes in RAM for speed. periodically it flushes them to the actual data files
[17:54:58] <Electron> oh
[17:54:58] <Electron> ok
[17:55:08] <dstorrs> but if the process dies before a write happens, the writes are lost
[17:55:25] <Electron> ok
[17:55:37] <dstorrs> this is bad, but you can't just say "write everything immediately" -- that kills performance, as inserting into a BTree can be slow
[17:56:01] <Electron> i c
[17:56:17] <dstorrs> so, changes get appended to the journal files almost immediately after being issued. then, at the next flush, they are written to the real data files.
[17:56:47] <dstorrs> since this is your first startup, you have no journal files yet. for some reason, it thinks that things are not valid so it's trying to do a recovery
[17:57:04] <dstorrs> that means "hit the journals and replay them into data files to catch up"...but there are no journals
[17:57:39] <dstorrs> I don't know what the specific cause of your issue is; I'm doing some googling. but that's the general problem
[17:57:40] <Electron> ok
[17:58:06] <Electron> i know im looking on google too
[17:58:20] <Electron> im gonna try and reinstall mongodb
[17:58:24] <Electron> again
[17:58:27] <dstorrs> rm -rf /data/db, then follow these directions: http://shiftcommathree.com/articles/how-to-install-mongodb-on-os-x
[17:58:28] <Electron> no problem
[17:58:37] <Electron> ok
[17:58:37] <dstorrs> see if that works for you
[17:59:02] <dstorrs> btw, be certain you are ALWAYS ALWAYS ALWAYS running the 64-bit version of Mongo.
[17:59:17] <dstorrs> the 32-bit version has significant data limitations
[17:59:20] <Electron> how to i make sure
[17:59:28] <Electron> is there a command
[17:59:33] <dstorrs> just be sure you download the 64-bit version
[17:59:38] <dstorrs> you're fine from there.
[17:59:38] <Electron> ok
[17:59:42] <Electron> ok
[17:59:48] <dstorrs> let me know how it goes.
[17:59:52] <Electron> is there a command to uninstall mongodb
[18:00:16] <dstorrs> It likely depends on how you installed it
[18:00:17] <Electron> or do just install following these instructions
[18:00:32] <dstorrs> at133: you can use findAndModify
[18:00:35] <Electron> i installed with brew install mongo
[18:00:54] <dstorrs> I haven't used homebrew, but there's probably a 'brew uninstall mongo'
[18:02:48] <at133> dstorrs: You can't just do it with update?
[18:03:38] <dstorrs> at133: what exactly are you trying to do? show me a start doc and an end doc that you want to achieve
[18:05:46] <at133> dstorrs: http://pastie.org/4057454
[18:07:57] <dstorrs> at133: ok...that looks like you just updated one field. Is that what you mean by "updating fields independently"? if so, look at the '$inc' operator on update()
[18:09:21] <at133> dstorrs: Yes, but when I do that with {$inc {view_count:0, click_count:1}} it creates a new document for every increment. So incrementing one works fine, but if I try to increment the other I get two documents.
[18:11:20] <dstorrs> let me test something...
[18:14:28] <at133> dstorrs: I deleted my collection and restarted mongo. Now it works. Thanks for the help.
[18:14:40] <dstorrs> at133: np
[18:14:51] <dstorrs> what was the exact command you were using?
[18:16:08] <dstorrs> db.coll.update({$inc {view_count:0, click_count:1}}) ?
[18:16:58] <at133> collection.update({page: 0 }, {"$inc": {view_count:1, click_count:0}},{upsert:true})
[18:17:48] <at133> collection.update({page: 0 }, {"$inc": {view_count:0, click_count:1}},{upsert:true})
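
For reference, a trimmed version of those updates: a $inc of 0 is a no-op, so each counter can be bumped on its own (the collection name pages is an assumption; the field names follow the conversation).
    db.pages.update({ page: 0 }, { $inc: { view_count: 1 } }, { upsert: true })    // bump views only
    db.pages.update({ page: 0 }, { $inc: { click_count: 1 } }, { upsert: true })   // bump clicks only
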
[18:18:02] <dstorrs> good to know. I haven't had occasion to use 'inc' and the docs are totally clear
[18:18:26] <at133> It's a very cool feature
[18:19:50] <dstorrs> yeah, really is
[18:21:41] <dstorrs> er..."not totally clear" is what I was trying to say. hopefully obviously.
[18:31:38] <Electron> dstorrs
[18:31:45] <Electron> thanks, it's working now
[18:42:51] <dstorrs> Electron: you're welcome.
[18:42:58] <dstorrs> glad you got it
[20:43:30] <spaam> yoyo. is there any package with 2.1 for debian testing ?
[20:44:30] <spaam> the repo on http://downloads-distro.mongodb.org/repo/debian-sysvinit is kinda broken.
[20:44:50] <spaam> W: Failed to fetch gzip:/var/lib/apt/lists/partial/downloads-distro.mongodb.org_repo_debian-sysvinit_dists_dist_10gen_binary-amd64_Packages: Hash Sum mismatch
[21:38:20] <gigo1980> hi, is it possible to change the chunk size of a sharded cluster at runtime?
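
For what it's worth, the chunk size can be changed at runtime by updating the settings collection in the config database; the value is in MB and only affects chunks split after the change. A hedged shell sketch:
    use config
    db.settings.save({ _id: "chunksize", value: 32 })
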
[21:41:53] <multiHYP> night all
[22:45:02] <Electron> hey
[22:45:24] <Electron> does anyone know anywhere that i can get sample data
[22:45:30] <Electron> to test mongodb out
[23:54:23] <clarle> Huh, is there any reason why mongoose forces _id to be an ObjectID?
[23:54:37] <clarle> I thought it was allowed to be of any type in MongoDB.