PMXBOT Log file Viewer

#mongodb logs for Friday the 29th of June, 2012

[02:20:03] <tmchaves> Hi. I have the following collection {book_id:1, commands:[{command:"command_1", sent:false}, {command:"command_2", sent:false}]}. I'm wondering if there is a way to return all commands that have sent == true. I've tried reading the docs but couldn't find it. Could anyone help?
[02:21:42] <spg> afaik you can't grab individual subdocuments
[02:22:25] <spg> that is, you can match against subdocuments within your documents, but you'll get the whole document back (you won't get documents that have no matching subdocuments, though)
[02:24:31] <tmchaves> spg, ok I thought there was a way of specifying a recursive matcher or something, such as find({book_id:1}, {commands:1}) that returns only part of the document
[02:26:10] <tmchaves> is it possible to recover only documents with sent == true? Could you help me do it? db.commands.find({commands:{sent:true}}) does not work
[02:26:24] <php10> doing an upsert with these two documents ($inc on the sum fields) http://pastebin.com/WJ66u2nC ... identical documents, upsert creates two, why?
[02:27:03] <spg> oh hurh
[02:27:05] <spg> https://jira.mongodb.org/browse/SERVER-828
[02:27:42] <spg> oh wait, the version it's fixed in isn't released yet
[02:28:38] <tmchaves> is it a bug?
[02:29:23] <spg> no, it's a new feature
[02:30:42] <spg> I think every time I peer into this channel someone new is asking about subdocuments
[02:30:57] <spg> so maybe they got the message
[02:31:09] <tmchaves> it's not stable, but I'll give it a try
[02:31:33] <tmchaves> what would be the correct syntax for that find operation that I'm trying to do?
[02:33:27] <spg> I'm not sure. doesn't look like it's outlined here
[02:33:54] <spg> hrm
[02:34:01] <spg> look somewhere near the middle of the comments
[02:34:15] <spg> there's a snippet that shows how you can use the aggregation framework
[02:34:19] <spg> maybe that'll do what you need
[02:34:39] <spg> "we have the aggregation framework now (2.1) which would let you filter parent documents independently of embedded documents and also use include/exclude on the parent document fields while still returning only matched embedded documents."
[02:34:44] <spg> sounds like what you need
[02:39:32] <tmchaves> okay
[02:39:39] <tmchaves> gonna take a look at it
[02:39:41] <tmchaves> thanks dpg
[02:39:43] <tmchaves> spg
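For reference, a sketch of the two approaches discussed above, reusing tmchaves's field names (the collection name books is assumed):

    // Matching on the embedded field returns the whole parent document:
    db.books.find({ "commands.sent": true })

    // The aggregation framework (2.1+) can return only the matching
    // embedded commands: unwind the array, filter, and regroup.
    db.books.aggregate([
        { $match: { book_id: 1 } },
        { $unwind: "$commands" },
        { $match: { "commands.sent": true } },
        { $group: { _id: "$book_id", commands: { $push: "$commands" } } }
    ])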
[06:38:28] <xarthna> Using MongoDB 2.0.6. Does count run async? I keep getting undefined return value when using db.collection.count()
[06:41:16] <xarthna> Context: Using express app.get('/blog', function(req, res) { var post_count = db.posts.count(); db.posts.find.limit(5).sort({_id: -1}, function(err, posts) { console.log(post_count); //undefined }); });
[06:41:25] <xarthna> mongojs driver
[06:42:36] <xarthna> Also, in the mongo shell, db.posts.count() returns an int
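For reference: count() in the node drivers is asynchronous and delivers its result through a callback rather than as a return value, which is why post_count is undefined above. A sketch against mongojs:

    // Nest the queries so post_count is available when the find completes.
    db.posts.count(function(err, post_count) {
        db.posts.find().limit(5).sort({_id: -1}).toArray(function(err, posts) {
            console.log(post_count); // defined here
        });
    });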
[07:56:38] <[AD]Turbo> hola
[08:22:16] <fredix> hi
[08:23:02] <fredix> it seems that I have some buffer overflow with auto_ptr<DBClientCursor> cursor
[08:23:54] <fredix> The BSONObj returned by the cursor is overwritten by a memory allocation
[09:24:30] <fredix> what is the difference between BSONObj::getOwned () and BSONObj::copy () ?
[10:13:39] <kali> fredix: getOwned returns a bson object that owns its buffer. if the bsonobject you call it on already owns its buffer, it will return itself. if not, it will call copy()
[10:14:00] <kali> fredix: it's in bson-inl.g, around line 225
[10:14:04] <kali> .h of course
[11:18:36] <mkg_> hi
[11:18:50] <mkg_> i have a collection with elements that looks like this: http://pastie.org/4170861
[11:19:24] <mkg_> i would like to change the order of the elements in the "c" array, is there some quick way to do this?
[11:19:48] <mkg_> on the whole collection of course
[11:22:36] <mkg_> "c" represents [latitude, longitude], but mongodb documentation suggests that it's better to have it as [longitude, latitude] so i'd like to change the order on the whole collection
[11:23:53] <mkg_> collection holds around 10^9 elements like this so I'd rather avoid reinserting all the data again...
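For reference, a sketch of an in-place swap from the shell; the collection name places is assumed, and with ~10^9 documents this is still one long pass, but it avoids reinserting the data:

    db.places.find({}, { c: 1 }).forEach(function(doc) {
        db.places.update(
            { _id: doc._id },
            { $set: { c: [ doc.c[1], doc.c[0] ] } }  // swap [lat, lng] to [lng, lat]
        );
    });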
[12:49:44] <souza> Hello guys, I'm having another problem with MongoDB and C. I have a collection whose documents have an array of embedded objects, and each of these objects has a date attribute; I have to get all elements of this collection where this date is more than X minutes before now. My code is here >> http://pastebin.com/QvtsuEYq ... I have a query that works in the mongoDB shell: db.Service.find({ "vm.last_date" : { $lt : d } }); (d is a date value)
[13:19:12] <e-dard> Hi all, sorry for the annoying hand-wavey question - I realise there is no solid answer… I'm noticing in my logs that insertions of documents into a collection with about 1.8 million documents are taking between 150 - 300 ms. Is this reasonable? Can I speed this up? The documents are small, avgObjSize is about 215 bytes
[13:20:02] <domeh> what kind of indexes do you have in the collection?
[13:20:44] <domeh> i guess that most of the time would be spent on updating indexes
[13:21:04] <e-dard> I have 4 indexes
[13:21:19] <e-dard> 3 indexes are strings
[13:21:22] <e-dard> 1 is date
[13:22:21] <e-dard> Is the operation in the best case CPU bound then?
[13:24:50] <NodeX> are your inserts index bound?
[13:25:02] <NodeX> (upserts or the like)
[13:25:56] <e-dard> NodeX: is that directed at me?
[13:25:59] <NodeX> yeh
[13:26:06] <e-dard> I don't know if they are index bound
[13:26:13] <e-dard> I just felt that they are a little slow
[13:26:17] <NodeX> are they upserts?
[13:26:25] <e-dard> No they are all new documents
[13:27:03] <NodeX> are they safe inserts?
[13:27:15] <e-dard> NodeX: aha! And so they are...
[13:27:31] <NodeX> take safe mode off if you can, you will save some return time
[13:29:40] <NodeX> else you will probably want to shard the system if it's becoming an issue
[13:33:02] <e-dard> NodeX: do you know if inserts only show up if they take a certain time?
[13:33:16] <e-dard> *show up in the logs
[13:33:41] <JoeyJoeJo> I can find documents using geoNear. How can I select documents that are within a polygon instead of a radius?
[13:34:00] <NodeX> there is a slow query time, I think the default is over 100ms but I could be wrong - check the docs.. it's configurable iirc
[13:34:26] <e-dard> NodeX: great. Well, the inserts aren't showing up now I've turned safe off, so they must be much faster now :)
[13:35:16] <NodeX> JoeyJoeJo
[13:35:29] <NodeX> polygonA = [ [ 10, 20 ], [ 10, 40 ], [ 30, 40 ], [ 30, 20 ] ] .... db.places.find({ "loc" : { "$within" : { "$polygon" : polygonA } } })
[13:35:34] <NodeX> (from the docs ;)
[13:35:40] <JoeyJoeJo> Awesome, thanks!
[13:36:00] <NodeX> safe mode waits for a write to disk or X slaves if I recall correctly e-dard
[13:36:08] <NodeX> you should see a massive improvement
[13:36:20] <e-dard> Great. Thanks.
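For context, "safe mode" in the 2.0-era drivers means following each write with getLastError, so the client blocks until the server acknowledges it; turning it off makes inserts fire-and-forget. Roughly what the driver does, in shell terms:

    db.things.insert({ a: 1 });
    db.runCommand({ getlasterror: 1 });        // safe mode: wait for acknowledgement
    db.runCommand({ getlasterror: 1, w: 2 });  // stricter: also wait for a secondary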
[14:07:12] <_johnny> JoeyJoeJo: i was actually writing a blog post about that just now :P both places in polygon and reverse (point in polygon)
[14:10:51] <souza> someone can have a look here > https://groups.google.com/forum/?fromgroups#!topic/mongodb-user/NyMNr_jo8Fg tks
[14:19:28] <laks> hello,I would like to know how to create collection via c-api? Is that possible? Thanks for any help.
[14:19:47] <NodeX> add a document to it
[14:20:10] <NodeX> collections and databases are automagically created if they're empty
[14:20:23] <NodeX> s/empty/dont exist
[14:22:26] <laks> okay.. what's the API for creating an empty database or collection?
[14:22:47] <laks> do i need to use bson calls directly?
[14:23:04] <NodeX> read the docs
[14:23:23] <laks> okay will check again.
[14:23:34] <NodeX> http://api.mongodb.org/c/current/tutorial.html
[14:25:33] <laks> thanks..reading it.
[14:32:40] <Deathspike> NodeX: Picking up where we left off, say I do something along the lines of "db.messages.find({ user_id: { $in : [1,2,3,4,5] }});" to retrieve messages of my friends, could I also get additional information if a message contains a "page_id" to retrieve the related page information (i.e. <ThisGuy> liked <thispage>)?
[14:34:10] <NodeX> separate query unfortunately
[14:34:24] <NodeX> unless you save that information with a message
[14:34:39] <NodeX> personally I would do something like this ....
[14:34:48] <Deathspike> I don't have to iterate through each message and query, do I? I could do a single query for all grabbed messages?
[14:35:09] <NodeX> have an "actions" collection with a type, id, action and query it like this..
[14:35:54] <NodeX> db.actions.find({uid: { $in : [1,2,3]}, type : {$in : ["message","like"]}});
[14:36:12] <NodeX> that will get you any user of 1 or 2 or 3 with a like or a message
[14:36:21] <NodeX> basically anything your friends have done
[14:38:02] <NodeX> the beauty of schemaless is you can store as many or as few fields as you want... a "type=message" could have 6 extra fields where a "type=like" might just have a link
[14:38:11] <NodeX> just parse it out in your app ;)
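A sketch of the actions collection NodeX describes; the fields beyond uid and type are illustrative:

    // Heterogeneous documents live side by side in one collection:
    db.actions.insert({ uid: 1, type: "message", text: "hello", ts: new Date() });
    db.actions.insert({ uid: 2, type: "like", page_id: 42, page_name: "Some Page", ts: new Date() });

    // A single query then covers the whole friends' activity feed:
    db.actions.find({ uid: { $in: [1, 2, 3] }, type: { $in: ["message", "like"] } }).sort({ ts: -1 });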
[14:38:57] <Deathspike> NodeX: Ok I understand that, but they liked *something*, I'm concerned about getting that reference. I.e. I could save the page you liked in the action collection as a simple name and id to link to, but what if the page name changed? Then the like would be referencing something that became incorrect.
[14:39:19] <NodeX> how often do page names change ?
[14:39:33] <NodeX> this is what you have to consider .. speed vs flexibility
[14:39:37] <Deathspike> Not very often, but when something can go wrong, it most certainly will.
[14:39:43] <NodeX> sorry speed & scalability
[14:40:11] <NodeX> so on those "not very often" times it's easy to do a quick query to update everything where page_id = foo
[14:40:33] <Deathspike> Oh god I feel like a retard now.
[14:41:01] <NodeX> nosql vs RDBMS is a very different way of thinking
[14:41:11] <Deathspike> Still, duplicating data is something, coming from relational DBs, that I frown upon. It's faster, hands down, but it feels icky. Isn't that an excessive waste of data?
[14:41:16] <Deathspike> Yeah, I must learn that :)
[14:41:17] <NodeX> there are no relations and no joins so you have to model very differently
[14:41:37] <NodeX> but how much data is it really ?
[14:41:56] <NodeX> you don't waste it in RDBMS for the following reasons ... scaling is a nightmare and performance is key
[14:42:10] <NodeX> on mongo scaling is easy and it's still fast
[14:42:36] <Deathspike> Not that I'll ever get to a scale like FB or Twitter, but let's say each page is liked a few thousand times. That's quite a bit of excessive data, but that is something that is usually neglected in favor of speed and scalability in the nosql community then?
[14:42:39] <NodeX> if your data model doesn't suit these sparse updates then model it differently
[14:43:51] <NodeX> It's the way I do things personally because it keeps my app fast and scalable
[14:44:23] <NodeX> considering there are no joins and the alternatives are often slower methods I would hazard that's how most people tackle this in nosql
[14:45:53] <Deathspike> Ok, I'll have to get used to this. I think I can model my entire app with these suggestions, except for one lingering question: searching. Let's say I want categories for pages and want to allow searching on categories and keywords in titles, how would I go about this?
[14:46:20] <Deathspike> I think I can make a collection with genres and add page_id's in it, and search through that to match pages quickly, but title searching?
[14:46:24] <NodeX> what kind of searching are we talking
[14:46:35] <NodeX> sql LIKE ?
[14:46:43] <Deathspike> LIKE %asd% in a title
[14:46:51] <NodeX> how large are the titles
[14:47:06] <Deathspike> ~14k entries with a title between 10 and 100 characters.
[14:47:28] <NodeX> mongo uses regex but it's not very efficient without a prefix
[14:48:02] <NodeX> what a few people do (including me in the past) is lowercase the string, split it on spaces, store it in a separate field, index that and use it for regex
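A sketch of that keyword-field approach; the collection and field names are illustrative:

    // Store the lowercased, space-split title alongside the original and index it:
    db.pages.insert({
        title: "Release Notes for MongoDB",
        keywords: ["release", "notes", "for", "mongodb"]
    });
    db.pages.ensureIndex({ keywords: 1 });

    // A prefix-anchored regex against the indexed array can use the index:
    db.pages.find({ keywords: /^rel/ });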
[14:49:09] <Deathspike> You mentioned that's in the past; what is your current preference?
[14:49:42] <NodeX> I use SOLR nowadays for keyword searching because that's what it's designed for
[14:49:59] <NodeX> I use mongo for basic searching (ranges, $in etc)
[14:50:10] <NodeX> (and as my datastore)
[14:51:10] <Deathspike> Is SOLR capable of doing approximate matching too?
[14:51:18] <NodeX> yep
[14:51:21] <Deathspike> i.e. like on imdb, getting a 95% hit
[14:51:32] <NodeX> it has slop
[14:51:42] <NodeX> Levenshtein based
[14:52:02] <NodeX> Deathspike~5 ... would match Deahtskike
[14:52:06] <NodeX> or w/e
[14:52:27] <Deathspike> That is very, very cool. I once tried implementing that in PHP, bloody nightmare.
[14:52:36] <NodeX> me too lol
[14:52:59] <NodeX> SOLR is a great accompaniment to Mongo
[14:53:25] <NodeX> but it's an index and as such is volatile so you need a datastore on the end of it
[14:53:28] <Deathspike> How do you manage title/id relations between the two platforms?
[14:54:00] <NodeX> SOLR has a concept of uniqueKey ... I just set it to the Mongo ObjectId() ... (similar to a primary key in SQL)
[15:01:52] <Deathspike> NodeX: I just read a little about Lucene; it seems this is more focused on "just searching" and not on extra features, so this could be used instead of SOLR?
[15:02:15] <NodeX> SOLR is lucene based
[15:02:20] <NodeX> lucene is the structure
[15:02:45] <Deathspike> I'm referring to this, http://lucene.apache.org/core/
[15:03:22] <NodeX> me too, SOLR is a "lucene" based search engine
[15:03:32] <NodeX> just like node.js is a javascript based whatever it is
[15:04:00] <NodeX> not strictly true but similar...
[15:04:17] <NodeX> mysql is an SQL based RDBMS ... MSSql is an SQL based RDBMS
[15:05:07] <NodeX> there are other lucene based things .. elastic search for one .. a little green in my opinion at the moment but it's gaining traction
[15:06:37] <horseT> hi, I need to do this using the PHP driver: db.ug.distinct( "u" , { "done":1, ddd:{$gt:20120103, $lt:20120105} } ).length
[15:06:52] <Deathspike> I don't need a whopping lot of features, just search. Point being, I should be able to model everything and get a search provider up and running somehow. It's going to be quite complex compared to the single-deploy stuff I used to work on, though.
[15:07:04] <horseT> but no idea how to set the ".length" :(
[15:07:45] <NodeX> Deathspike : you won't be able to mix SOLR and Mongo queries unfortunately
[15:07:54] <NodeX> horseT : set the length ?
[15:10:54] <JoeyJoeJo> In the python driver I run db.collection.find() and I get a pymongo.cursor object. Is there a quick way to turn that into JSON?
[15:12:24] <NodeX> in the PHP driver you have to loop the cursor so I imagine it's the same for all drivers
[16:19:19] <wereHamster> is $in guaranteed to return the documents in the same order as the IDs in the $in array?
[16:19:33] <wereHamster> (so I can zip the results with something..)
[16:25:44] <NodeX> I don't think it's guaranteed
[16:26:45] <wereHamster> bummer :(
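Since the server makes no ordering promise for $in, the usual workaround is to reorder client-side; a shell sketch, assuming lookups by _id:

    var ids = [id1, id2, id3];  // hypothetical ObjectIds, in the desired order
    var byId = {};
    db.coll.find({ _id: { $in: ids } }).forEach(function(doc) {
        byId[doc._id.str] = doc;  // .str is the hex string of an ObjectId
    });
    var ordered = ids.map(function(id) { return byId[id.str]; });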
[16:54:28] <sir_> am i correct in saying that the file data in gridfs chunks is the pure binary and can be searched against as such, and it's only base64 encoded when displaying results from the shell?
[17:56:48] <lahwran> what would be the most efficient way to store a versioned wiki?
[17:57:20] <lahwran> I'm currently thinking a multidimensional sparse mapping
[17:57:29] <lahwran> ie, documents which have a "key" and a "value"
[17:57:51] <lahwran> the key would be a subdocument, where each sub-key represents the location along a dimension
[17:59:15] <anrope> Hi, I'm trying to create an index to expire data from a collection. I'm working in python, using pymongo. Does the 'ttl' keyword argument to Collection.{create,ensure}_index() correspond to the expireAfterSeconds parameter mentioned in the mongo docs here: http://docs.mongodb.org/manual/tutorial/expire-data/ ? The documentation makes it sound like the pymongo ttl parameter just controls how long ensure_index waits to do a create_index.
[18:15:54] <mmlac> Does anyone here have a deep understanding of MONGOID?
[18:20:27] <FerchoDB> A customer has an array of orders. Each order has an array of OrderDetails, and each OrderDetail has a prodId. is there a way, server side, to query all the customers that have an order with an orderdetail with a prodId = 2 ?
[18:20:45] <FerchoDB> I'm sorry, corrected the question and missed the Hello!
[18:20:47] <FerchoDB> :)
[18:21:24] <FerchoDB> I mean, query all the customers who ever bought a product with productId = x
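A sketch of the query FerchoDB is after, with field names guessed from the description; dot notation reaches through both levels of embedded arrays, and the whole customer document comes back:

    db.customers.find({ "orders.orderDetails.prodId": 2 })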
[18:22:30] <chubz> hi, mongo noob here. how do I find the hostname or IP address of the mongod or mongos process?
[18:23:44] <drPoggs> chubz: you mean the IP address of the server?
[18:24:41] <FerchoDB> you are already connected to a mongo but you don't know the host? try db.serverStatus()
[18:24:47] <FerchoDB> it will show you at least the hostname
[18:25:26] <FerchoDB> I don't know how to get the IP, but if you are already connected you can just see netstats
[18:28:58] <lohkey> when I insert a doc into a collection, is the object id created on the server or instantly in the driver?
[18:31:58] <chubz> drPoggs: yes (sorry for late reply) i mean the IP of the server
[18:36:41] <lohkey> set ban on McNulty?
[18:36:47] <drPoggs> chubz: If it's UNIX, "ifconfig"
[18:37:07] <lohkey> drPoggs: do you know the answer to my question?
[18:37:38] <dogn_one> hithere
[18:38:01] <jstout24> what's the best gui for a mac?
[18:38:25] <jstout24> besides "shell"
[18:38:52] <dogn_one> is it possible to set the index value at inserting time? please
[18:39:32] <chubz> drPoggs: thanks. also how do i check the port number for the mongod processes?
[18:40:08] <drPoggs> chubz: try 1261
[18:40:59] <chubz> drPoggs: is that just guessing? or is that default? is there a way i can actually check?
[18:42:59] <sir_> default should be 27017 for single mongod instance or a mongos instance
[18:43:14] <sir_> 27018 if you are doing replicaSets/sharding
[18:43:19] <sir_> 27019 for config servers
[18:43:48] <sir_> you can use 'netstat -an' to look at what ports the system is listening on
[18:44:15] <Fudge_> I'm kinda new to mongo. Say I have a document with a dictionary, how do I update that document to add a new element?
[18:44:23] <Fudge_> From the mongodb shell ^
[18:45:06] <sir_> db.collection.update({'some_key': 'some value'}, {$set: {'new_key': 'new value'}})
[18:45:14] <Fudge_> oops
[18:45:18] <Fudge_> sir_: I mean the dictionary
[18:45:36] <Fudge_> say I have a dict with emails and I want to update that dict with another email
[18:46:05] <sir_> do you mean you have a document with a list of emails?
[18:46:08] <sir_> or a dictionary of emails?
[18:46:38] <Fudge_> a dictionary of emails
[18:46:41] <Fudge_> in a document
[18:47:04] <sir_> then what i said would work since the key you add the email to would have to be unique at some level
[18:48:13] <Fudge_> so if I did db.collection.insert({name: "test", emails: []})
[18:48:14] <Fudge_> then
[18:48:27] <sir_> so you have a list of emails then, not a dictionary =p
[18:48:35] <Fudge_> oh
[18:48:43] <Fudge_> list then xD
[18:48:46] <sir_> :)
[18:48:52] <Fudge_> Aren't they dictionaries in Python?
[18:49:16] <sir_> db.collection.update({'name': 'test'}, {$addToSet: {'emails': 'new email'}})
[18:49:22] <sir_> no, lists are not dictionaries
[18:49:32] <sir_> lists are similar to arrays (sort of)
[18:49:39] <sir_> dictionaries are key/value pairs
[18:49:55] <Fudge_> ah, sorry
[18:49:58] <Fudge_> got mixed up
[18:49:59] <sir_> you can change $addToSet to be something else like $push
[18:50:06] <sir_> depending on what type of behavior you want
[18:50:27] <sir_> there's a list of updating options on the mongo site if you want to see the others as well
[18:51:05] <Fudge_> yeah, might be nice as a future reference; URL?
[18:51:16] <sir_> one sec, let me get it for you
[18:51:34] <sir_> http://www.mongodb.org/display/DOCS/Updating#Updating-ModifierOperations
[18:51:46] <Fudge_> thanks
[18:51:47] <Fudge_> :F
[18:51:49] <Fudge_> :D*
[18:51:52] <sir_> no problem :)
[18:53:25] <dogn_one> is it possible to set the key value at inserting time? please
[18:54:20] <sir_> dogn_one: can you clarify? when you insert, you are setting all the key/value pairs you want in the document
[18:55:00] <dogn_one> sir_ sorry im talking about some index
[18:59:46] <tomlikestorock> in pymongo, where can I import ObjectId to create my own object id classes? I can't seem to find it...
[18:59:51] <dogn_one> it happens I have a mess here: I have documents with a reference, some hexadecimal value, which maps to an integer. I want to avoid ordering the documents and then calculating where it starts.
[18:59:57] <tomlikestorock> I'm trying to query by objectid
[19:00:17] <sir_> tomlikestorock: which version of pymongo?
[19:00:38] <tomlikestorock> 2.2
[19:00:46] <sir_> 'from bson import ObjectId'
[19:05:45] <patroy> Hi, I have a mongo database and I can't seem to be able to update records that are in a sharded collection. Any reason why that would happen? They insert fine, but I can't update
[19:06:05] <sir_> patroy: any errors?
[19:06:17] <patroy> not that I can see
[19:06:20] <patroy> I'm using ruby mongoid
[19:06:27] <patroy> its returning true when I update
[19:06:34] <patroy> but the value doesnt change
[19:07:45] <sir_> not familiar with the ruby driver, but you can try changing the update query to a find, and see how many documents are being returned
[19:07:55] <sir_> to ensure it's finding what you expect before updating
[19:08:27] <sir_> i know with sharded collections, if you don't use the _id, you need to include the shard key
[19:10:30] <patroy> sir_: ok I'll try with the mongo console first
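A sketch of sir_'s point about shard keys; the collection, shard key, and fields are illustrative:

    // With a collection sharded on { customer_id: 1 }, an update that doesn't
    // target _id must include the shard key so it can be routed:
    db.orders.update(
        { customer_id: 17, status: "open" },   // shard key included
        { $set: { status: "closed" } }
    );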
[19:42:44] <jamilbk> if i have a document with an array field of tags, how can i find other documents that also have those tags in their tags array?
[19:44:26] <sir_> jamilbk: you can use $in
[19:44:36] <jamilbk> ahh ok thanks
[19:44:37] <sir_> and pass in the same array of tags
[19:44:38] <sir_> :)
[19:44:59] <jamilbk> was trying elemMatch, but couldn't figure out how to OR the tags together so to speak :-)
[19:45:23] <JoeyJoeJo> How can I run this find command in python? db.networks.find({ "loc" : { "$within" : { "$box" : [ [-50,-50], [50,50] ] } } })
[19:46:10] <JoeyJoeJo> I tried this but it didn't work. cursor = collection.find("loc" : { "$within" : { "$box" : [[ coords[0]+","+coords[1]],[coords[2]+","+coords[3]]] } })
[19:57:14] <ghost[]> .max and .min never seem to work for me... shouldn't this return the record I just inserted?
[19:57:30] <ghost[]> db.things.insert({a:2}); db.things.find().min({a:1})
[19:58:07] <ghost[]> instead i get "$err" : "no index found for specified keyPattern: {} min: { a: 1.0 } max: {}"
[19:58:39] <sir_> db.things.find({'a': {$gt: 0}})
[19:58:47] <elarson> I believe i have a race condition in my code and I'm curious how to help avoid it
[19:59:59] <elarson> it seems that we are trying to call update() with upsert=True and I think the pymongo client tries to update, fails, the second writer slips in, causing the next insert to fail...
[20:00:24] <elarson> that probably doesn't really make sense though... sorry for thinking aloud a sec
[20:01:28] <sir_> elarson: not sure it'll help
[20:01:29] <ghost[]> ohh sir_ min and max only search index keys
[20:01:33] <sir_> but try looking at $atomic
[20:01:40] <ghost[]> thanks for $gt
[20:01:44] <elarson> sir_: it might, thanks!
[20:01:46] <sir_> ghost[]: np :)
[20:13:03] <JoeyJoeJo> How can I find documents where a string != "asdf"? In MySQL I would do SELECT * FROM table WHERE StringField != "asdf";
[20:13:50] <wereHamster> $ne
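The SQL translates directly, reusing JoeyJoeJo's names:

    db.table.find({ StringField: { $ne: "asdf" } })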
[20:55:47] <littlen_1> sir_ this is what I was looking for collection.ensureIndex(new BasicDBObject("reference", 1), new BasicDBObject("unique", true)); I guess, thanks
[21:05:06] <Goopyo> Q: The MongoDB site insists that sharding is the primary method of speeding things up, but what if data size isn't your issue and the amount of traffic is? Surely replication with slave reads would give you a substantial boost?
[21:10:23] <sir_> replication with read_preference being set to SECONDARY gives you your processing power, however
[21:10:42] <sir_> by sharding you have simultaneous lookups on sections of your data being returned to you
[21:25:30] <WormDrink> hi
[21:25:45] <WormDrink> If I disable the journal, can I lose more than 60 seconds of data?
[21:27:04] <JoeyJoeJo> I can find all documents in a geographical rectangle using $box. Now how can I narrow my results to everything within the rectange where a = 1?
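Geospatial criteria combine with ordinary equality criteria in the same query document; the collection name comes from JoeyJoeJo's earlier question:

    db.networks.find({
        loc: { $within: { $box: [ [-50, -50], [50, 50] ] } },
        a: 1
    })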
[21:32:00] <halcyon918> is there a reason why when I call {getLastError : 1, w : "veryImportant"} (which we have configured as tags on our replicas), the call just hangs (I'm in the mongo CLI testing this call)
[21:46:45] <dgottlieb> halcyon918: can you confirm the previous insert/update command actually replicated to all the members in veryImportant?
[21:47:02] <mmlac> Does anyone here have a deep understanding of MONGOID?
[21:47:37] <dgottlieb> halcyon918: you can also add a wtimeout: <time in milliseconds> to avoid the hang
[21:47:38] <halcyon918> dgottlieb: I'm just reading up on this… I didn't know it was a blocking call (but totally makes sense to me now)...
[21:47:55] <halcyon918> dgottlieb: I'll query each replica individually
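dgottlieb's wtimeout suggestion as a concrete call: it bounds the wait so an unsatisfiable tagged write concern returns an error instead of hanging:

    db.runCommand({ getlasterror: 1, w: "veryImportant", wtimeout: 5000 })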
[21:50:56] <cgriego> If I wanted to download the 2.0.2 deb package, where would I find that?
[21:52:05] <dgottlieb> WormDrink: your question is a little vague, but in general I would say you can't. Then again I wouldn't exactly bet everything on that answer :)
[21:56:25] <halcyon918> dgottlieb: I see the insert replicate to all three instances, but getLastError still hangs
[21:59:28] <rockets> Anybody ever run MySQL and MongoDB on the same server? Did you have any issues or weirdness?
[21:59:51] <halcyon918> w:'majority' seems to work, but w:'<my tag>' hangs
[22:00:29] <linsys> Rockets: now why would you ever want to do that
[22:00:42] <rockets> linsys: we have legacy stuff that depends on MySQL
[22:00:51] <linsys> rockets: get a new server?
[22:01:00] <rockets> they're currently separate
[22:01:05] <rockets> im just wondering if they really need to be
[22:01:18] <rockets> this is cloud stuff so we're paying monthly per server instance
[22:01:24] <linsys> rockets: mongodb uses mmaped files for storing data files so mysql and mongodb are going to fight for memory
[22:01:32] <rockets> linsys: mm good point
[22:01:43] <rockets> It was just a thought. I'll leave things the way they are.
[22:15:09] <JoeyJoeJo> Is it possible to search within search results?
[22:18:57] <dgottlieb> halcyon918: interesting, you sure the tag is setup right? no additional fourth server that's down? I haven't played with customizing tags before
[22:19:33] <halcyon918> dgottlieb: am I sure it's set up right? I have a DBA… so no :P
[22:20:00] <halcyon918> dgottlieb: at the moment, I'm ok with Majority + Journal Safe
[22:24:27] <dgottlieb> halcyon918: hah! well I'd love to hear anything weird you find if you do wind up poking more at it
[22:26:12] <halcyon918> dgottlieb: will do
[22:27:24] <cgriego> I can't find old downloads of the debian/ubuntu packages. Are those not archived anywhere?
[22:49:11] <willwork4foo> hi all... I get the feeling I'm a bit late to this party. I'm just discovering mongodb for the first time this week
[22:49:19] <willwork4foo> few years late I fear?
[22:49:27] <willwork4foo> I
[22:49:57] <willwork4foo> I'm looking for some documentation on translations and interfaces with other DB technology (like Oracle for example)